Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Maximum energy yield approach for CPV tracker design
Aldaiturriaga, E.; González, O.; Castro, M.
2012-10-01
Foton HC Systems has developed a new CPV tracker model with a special focus on tracking efficiency and on the effect of tracker control techniques on the final energy yield of the system. This paper presents the theoretical work carried out to determine the energy yield of a CPV system, and illustrates the steps involved in calculating and understanding how the energy consumed for tracking trades off against tracker pointing errors. Additionally, the expressions used to compute the optimum parameters are presented and discussed.
Setting maximum sustainable yield targets when yield of one species affects that of other species
Rindorf, Anna; Reid, David; Mackinson, Steve;
2012-01-01
species. But how should we prioritize and identify most appropriate targets? Do we prefer to maximize by focusing on total yield in biomass across species, or are other measures targeting maximization of profits or preserving high living qualities more relevant? And how do we ensure that targets remain......, industry, managers, and NGO representatives. The workshop was designed to identify variants of maximum sustainable yield (MSY) which account for the necessary trade‐offs and estimate the preferences of the workshop participants for each of these variants across five regional groups: the Baltic Sea...
Maximum photosynthetic yield of green microalgae in photobioreactors.
Zijffers, Jan-Willem F; Schippers, Klaske J; Zheng, Ke; Janssen, Marcel; Tramper, Johannes; Wijffels, René H
2010-11-01
The biomass yield on light energy of Dunaliella tertiolecta and Chlorella sorokiniana was investigated in a 1.25- and 2.15-cm light path panel photobioreactor at constant ingoing photon flux density (930 µmol photons m⁻² s⁻¹). At the optimal combination of biomass density and dilution rate, equal biomass yields on light energy were observed for both light paths for both microalgae. The observed biomass yield on light energy appeared to be based on a constant intrinsic biomass yield and a constant maintenance energy requirement per gram biomass. Using the model of Pirt (New Phytol 102:3-37, 1986), a biomass yield on light energy of 0.78 and 0.75 g (mol photons)⁻¹ and a maintenance requirement of 0.0133 and 0.0068 mol photons g⁻¹ h⁻¹ were found for D. tertiolecta and C. sorokiniana, respectively. The observed yield decreases steeply at low light supply rates, and according to this model, this is related to the increase of the amount of usable light energy diverted to biomass maintenance. With this study, we demonstrated that the observed biomass yield on light in short light path bioreactors at high biomass densities decreases because maintenance requirements are relatively high under these conditions. All our experimental data for the two strains tested could be described by the physiological models of Pirt (New Phytol 102:3-37, 1986). Consequently, for the design of a photobioreactor, we should maintain a relatively high specific light supply rate. A process with high biomass densities and high yields at high light intensities can only be obtained in short light path photobioreactors.
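A minimal sketch of the Pirt-type maintenance relation summarized above, assuming the usual form q_E = mu/Y_max + m_E, so that Y_obs = Y_max (1 - m_E/q_E); the yield and maintenance values are those quoted in the abstract, while the example light supply rates are purely illustrative:

```python
# Pirt-type light-limited growth (sketch): the specific light supply rate q_E
# (mol photons g^-1 h^-1) is split between growth and maintenance,
# q_E = mu / Y_max + m_E, so the observed biomass yield on light is
# Y_obs = mu / q_E = Y_max * (1 - m_E / q_E).

def observed_yield(q_E, Y_max, m_E):
    """Observed biomass yield on light, g biomass per mol photons absorbed."""
    if q_E <= m_E:               # all absorbed light is spent on maintenance
        return 0.0
    return Y_max * (1.0 - m_E / q_E)

# Values quoted in the abstract for the two strains.
strains = {
    "D. tertiolecta": {"Y_max": 0.78, "m_E": 0.0133},
    "C. sorokiniana": {"Y_max": 0.75, "m_E": 0.0068},
}

for name, p in strains.items():
    for q_E in (0.02, 0.05, 0.2):   # illustrative specific light supply rates
        print(f"{name}: q_E = {q_E:.2f} -> Y_obs = {observed_yield(q_E, **p):.2f} g/mol photons")
```

The steep drop in observed yield at low q_E is exactly the maintenance effect the abstract describes.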
Network Decomposition and Maximum Independent Set Part Ⅰ: Theoretic Basis
朱松年; 朱嫱
2003-01-01
The structure and characteristics of a connected network are analyzed, and a special kind of sub-network that can optimize the iteration process is identified. Sufficient and necessary conditions for obtaining the maximum independent set are then deduced. The neighborhood of this sub-network is found to possess similar characteristics, but the two can never be merged. In particular, the network can be divided into two parts in a certain way, and both parts can then be transformed into a pair-sets network, in which the special sub-networks and their neighborhoods appear alternately throughout. Using this property, the network can be decomposed as far as possible without losing any solutions. These results lay the groundwork for developing a much better algorithm with a polynomial time bound for an odd network in the application research part of this subject.
A Realization of Theoretical Maximum Performance in IPSec on Gigabit Ethernet
Onuki, Atsushi; Takeuchi, Kiyofumi; Inada, Toru; Tokiniwa, Yasuhisa; Ushirozawa, Shinobu
This paper describes an IPSec (IP Security) VPN system and how it attains the theoretical maximum performance on Gigabit Ethernet. Conventional systems are implemented in software and have several bottlenecks that must be overcome to reach the theoretical maximum on Gigabit Ethernet. We therefore propose an IPSec VPN system with an FPGA (Field Programmable Gate Array)-based hardware architecture that forwards packets through pipelined flow processing and uses six parallel encryption and authentication engines. We show that our system attains the theoretical maximum performance even for short packets, which had been difficult to achieve until now.
Wilson, Douglas Carl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Loomis, Eric Nicholas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-17
We are anticipating our first NIF double shell shot using an aluminum ablator and a glass inner shell filled with deuterium, shown in figure 1. The expected yield is between a few times 10^{10} and a few times 10^{11} dd neutrons. The maximum credible yield is 5 × 10^{13}. This memo describes why, and what would be expected with variations on the target. This memo evaluates the maximum credible yield for deuterium-filled double shell capsule targets with an aluminum ablator shell and a glass inner shell in yield Category A (< 10^{14} neutrons). It also pertains to fills of gas diluted with hydrogen, helium (^{3}He or ^{4}He), or any other fuel except tritium. This memo does not apply to lower-Z ablator dopants, such as beryllium, as these would increase the ablation efficiency. This evaluation is for 5.75-scale hohlraum targets of either gold or uranium with helium gas fills with density between 0 and 1.6 mg/cc. It could be extended to other hohlraum sizes and shapes with slight modifications. At present only laser pulse energies up to 1.5 MJ were considered, with a single-step laser pulse of arbitrary shape. Since yield decreases with laser energy for this target, the memo could be extended to higher laser energies if desired. The maximum laser parameters addressed here are near the edge of NIF’s capability and constitute the operating envelope for experiments covered by this memo. We have not considered multiple-step pulses, which would probably offer no performance advantage and are not planned for double shell capsules. The main target variables are summarized in Table 1 and explained in detail in the memo. Predicted neutron yields are based on 1D and 2D clean simulations.
Medhat Abd El Barr
2016-01-01
Objective: To evaluate the exploitation status of the stocks of demersal fishes in Omani artisanal fisheries. Methods: Time-series data between 2005 and 2014 on catches and effort, represented by the number of fishing boats, were used to estimate catch per unit effort and maximum sustainable yields by applying the Schaefer surplus production model. Regression analyses were made online using GraphPad software. Results: The study revealed that increasing the number of boats in the fishery caused a decrease in the catch per unit effort of some species. Maximum sustainable yields and exploitation status were estimated for these species by applying this model. Conclusions: Some demersal fish species were found to be caught in quantities exceeding maximum sustainable yields during some fishing seasons, indicating overexploitation of their stocks.
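As a rough illustration of the Schaefer surplus production approach used above (not the authors' GraphPad workflow; the catch-and-effort numbers are invented), MSY follows from a linear regression of CPUE on effort:

```python
import numpy as np

# Schaefer surplus production model at equilibrium: CPUE = a - b*E, so yield
# Y = E*CPUE = a*E - b*E**2, which is maximal at E_MSY = a/(2b) with MSY = a**2/(4b).

# Hypothetical catch (t) and effort (boats) series, for illustration only.
effort = np.array([120, 150, 180, 210, 250, 300], dtype=float)
catch = np.array([900, 1020, 1100, 1120, 1050, 950], dtype=float)
cpue = catch / effort

slope, intercept = np.polyfit(effort, cpue, 1)   # CPUE-vs-effort regression
a, b = intercept, -slope
print(f"E_MSY = {a / (2 * b):.0f} boats, MSY = {a**2 / (4 * b):.0f} t")
```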
Verification and evaluation of the "Three Optimums Theory" in rice breeding for maximum yield
YANG Shouren; ZHANG Longbu; CHEN Wenfu; XU Zhengjin; WANG Jinmin
1994-01-01
Rice breeding for maximum yield is a hot topic today in the rice community of the world, and a hard nut to crack into the bargain. For many years, we have been devoted to the subject. In 1987 we discussed the subject in publications at home and abroad,
Manufacturing of par-fried French-fries. Part 3: a blueprint to predict the maximum production yield
Somsen, D.J.; Capelle, A.; Tramper, J.
2004-01-01
Very little research on the production yield of par-fried French-fries has been reported in the literature. This paper bridges the knowledge gap and outlines the development of a model to predict the maximum production yield of par-fried French-fries. This yield model can be used to calculate the yield
Ulrich, Clara; Vermard, Youen; Dolder, Paul J.
2017-01-01
. An objective method is suggested that provides an optimal set of fishing mortality within the range, minimizing the risk of total allowable catch mismatches among stocks captured within mixed fisheries, and addressing explicitly the trade-offs between the most and least productive stocks.......Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative...... ranges to combine long-term single-stock targets with flexible, short-term, mixed-fisheries management requirements applied to the main North Sea demersal stocks. It is shown that sustained fishing at the upper bound of the range may lead to unacceptable risks when technical interactions occur...
Quota implementation of the maximum sustainable yield for age-structured fisheries.
Kanik, Zafer; Kucuksenel, Serkan
2016-06-01
One of the main goals stated in the proposals for the Common Fisheries Policy (CFP) reform was achieving maximum sustainable yield (MSY) for all European fisheries. In this paper, we propose a fishing rights allocation mechanism, or management system, which specifies catch limits for individual fishing fleets to implement MSY harvesting conditions in an age-structured bioeconomic model. An age-structured model of a single-species fishery with two fleets having perfect or imperfect fishing selectivity is studied. If fishing technology or gear selectivity depends on the relative age composition of the mature fish stock, fixed harvest proportions derived from catchability and bycatch coefficients are no longer valid. As a result, not only the age structure and fishing technology but also the estimated level of MSY steers the allocation of quota shares. The results also show that allocation of quota shares based on historical catches or auctioning may not provide viable solutions to achieve MSY.
Osterloh, Frank E
2014-10-02
The Shockley-Queisser analysis provides a theoretical limit for the maximum energy conversion efficiency of single junction photovoltaic cells. But besides the semiconductor bandgap no other semiconductor properties are considered in the analysis. Here, we show that the maximum conversion efficiency is limited further by the excited state entropy of the semiconductors. The entropy loss can be estimated with the modified Sackur-Tetrode equation as a function of the curvature of the bands, the degeneracy of states near the band edges, the illumination intensity, the temperature, and the band gap. The application of the second law of thermodynamics to semiconductors provides a simple explanation for the observed high performance of group IV, III-V, and II-VI materials with strong covalent bonding and for the lower efficiency of transition metal oxides containing weakly interacting metal d orbitals. The model also predicts efficient energy conversion with quantum confined and molecular structures in the presence of a light harvesting mechanism.
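For reference, a minimal sketch of the bandgap-only step of the Shockley-Queisser analysis that the abstract builds on (the so-called ultimate efficiency for an assumed 6000 K blackbody sun); this sketch does not include the entropy term discussed in the paper:

```python
import numpy as np
from scipy.integrate import quad

T_SUN = 6000.0        # K, assumed blackbody temperature of the sun
K_EV = 8.617e-5       # Boltzmann constant in eV/K

def ultimate_efficiency(e_gap_ev):
    """Fraction of blackbody power delivered at the gap energy by above-gap photons."""
    x_g = e_gap_ev / (K_EV * T_SUN)
    absorbed_photons, _ = quad(lambda x: x**2 / np.expm1(x), x_g, 50.0)
    total_power, _ = quad(lambda x: x**3 / np.expm1(x), 1e-6, 50.0)
    return x_g * absorbed_photons / total_power

for e_gap in (1.1, 1.34, 1.8):
    print(f"Eg = {e_gap:.2f} eV -> ultimate efficiency ~ {ultimate_efficiency(e_gap):.2f}")
```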
Maziero, G C; Baunwart, C; Toledo, M C
2001-05-01
The theoretical maximum daily intakes (TMDI) of the phenolic antioxidants butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and tert-butylhydroquinone (TBHQ) in Brazil were estimated using food consumption data derived from a household economic survey and a packaged goods market survey. The estimates were based on the maximum levels of use of the food additives specified in national food standards. The calculated intakes of the three additives for the mean consumer were below the ADIs. Estimates of TMDI for BHA, BHT and TBHQ ranged from 0.09 to 0.15, 0.05 to 0.10 and 0.07 to 0.12 mg/kg of body weight, respectively. To check whether the additives are actually used at their maximum authorized levels, analytical determinations of these compounds in selected food categories were carried out using HPLC with UV detection. BHT and TBHQ concentrations in foodstuffs considered to be representative sources of these antioxidants in the diet were below the respective maximum permitted levels. BHA was not detected in any of the analysed samples. Based on the maximal approach and on the analytical data, it is unlikely that the current ADIs of BHA (0.5 mg/kg body weight), BHT (0.3 mg/kg body weight) and TBHQ (0.7 mg/kg body weight) will be exceeded in practice by the average Brazilian consumer.
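A small sketch of the budget-style TMDI calculation described above: the maximum permitted level in each food category is multiplied by the consumption of that category, summed, and divided by body weight. All numbers below are invented for illustration, not the Brazilian survey data:

```python
# TMDI = sum_i(max permitted level_i * daily consumption_i) / body weight
foods = [
    # (food category, consumption in g/day, max permitted level in mg/kg food)
    ("vegetable oils and margarine", 30.0, 200.0),
    ("biscuits and pastry", 80.0, 100.0),
    ("chewing gum", 2.0, 400.0),
]
body_weight_kg = 60.0

tmdi = sum(grams / 1000.0 * level for _, grams, level in foods) / body_weight_kg
print(f"TMDI = {tmdi:.3f} mg/kg bw/day")   # compare against the ADI
```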
Theoretical Evaluation of the Maximum Work of Free-Piston Engine Generators
Kojima, Shinji
2017-01-01
Utilizing the adjoint equations that originate from the calculus of variations, we have calculated the maximum thermal efficiency that is theoretically attainable by free-piston engine generators considering the work loss due to friction and Joule heat. Based on the adjoint equations with seven dimensionless parameters, the trajectory of the piston, the histories of the electric current, the work done, and the two kinds of losses have been derived in analytic forms. Using these we have conducted parametric studies for the optimized Otto and Brayton cycles. The smallness of the pressure ratio of the Brayton cycle makes the net work done negative even when the duration of heat addition is optimized to give the maximum amount of heat addition. For the Otto cycle, the net work done is positive, and both types of losses relative to the gross work done become smaller with the larger compression ratio. Another remarkable feature of the optimized Brayton cycle is that the piston trajectory of the heat addition/disposal process is expressed by the same equation as that of an adiabatic process. The maximum thermal efficiency of any combination of isochoric and isobaric heat addition/disposal processes, such as the Sabathe cycle, may be deduced by applying the methods described here.
Optimum poultry litter rates for maximum profit vs. yield in cotton production
Cotton lint yield responds well to increasing rates of poultry litter fertilization, but little is known of how optimum rates for yield compare with optimum rates for profit. The objectives of this study were to analyze cotton lint yield response to poultry litter application rates, determine and co...
Abas, Lindy; Luschnig, Christian
2010-06-15
Isolation of a microsomal membrane fraction is a common procedure in studies involving membrane proteins. By conventional definition, microsomal membranes are collected by centrifugation of a postmitochondrial fraction at 100,000g in an ultracentrifuge, a method originally developed for large amounts of mammalian tissue. We present a method for isolating microsomal-type membranes from small amounts of Arabidopsis thaliana plant material that does not rely on ultracentrifugation but instead uses the lower relative centrifugal force (21,000g) of a microcentrifuge. We show that the 21,000g pellet is equivalent to that obtained at 100,000g and that it contains all of the membrane fractions expected in a conventional microsomal fraction. Our method incorporates specific manipulation of sample density throughout the procedure, with minimal preclearance, minimal volumes of extraction buffer, and minimal sedimentation pathlength. These features allow maximal membrane yields, enabling membrane isolation from limited amounts of material. We further demonstrate that conventional ultracentrifuge-based protocols give submaximal yields due to losses during early stages of the procedure; that is, extensive amounts of microsomal-type membranes can sediment prematurely during the typical preclearance steps. Our protocol avoids such losses, thereby ensuring maximal yield and a representative total membrane fraction. The principles of our method can be adapted for nonplant material.
Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro
2017-10-01
The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r→Z transform.
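A minimal sketch of the standard Fisher r -> Z confidence interval for a correlation coefficient, the quantity the dataset above revolves around (illustrative values, not the article's data):

```python
import math

def r_confidence_interval(r, n, z_crit=1.96):
    """95% confidence interval for a Pearson correlation via the Fisher transform."""
    z = math.atanh(r)                  # Fisher r -> Z
    se = 1.0 / math.sqrt(n - 3)        # standard error of Z
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

print(r_confidence_interval(r=0.85, n=20))   # roughly (0.65, 0.94)
```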
Liarte, Danilo B; Transtrum, Mark K; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P
2016-01-01
We review our work on theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces. These limits are of key relevance to current and future accelerating cavities, especially those made of new higher-$T_c$ materials such as Nb$_3$Sn, NbN, and MgB$_2$. We summarize our calculations of the so-called superheating field $H_{\mathrm{sh}}$, beyond which flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We briefly discuss experimental measurements of the superheating field, comparing to our estimates. We explore the effects of materials anisotropy and disorder. Will we need to control surface orientation in the layered compound MgB$_2$? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. T...
J. G. Dyke; Kleidon, A.
2010-01-01
The Maximum Entropy Production (MEP) principle has been remarkably successful in producing accurate predictions for non-equilibrium states. We argue that this is because the MEP principle is an effective inference procedure that produces the best predictions from the available information. Since all Earth system processes are subject to the conservation of energy, mass and momentum, we argue that in practical terms the MEP principle should be applied to Earth system processes in terms of the ...
Liarte, Danilo B.; Posen, Sam; Transtrum, Mark K.; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P.
2017-03-01
Theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces are of key relevance to current and future accelerating cavities, especially those made of new higher-$T_c$ materials such as Nb$_3$Sn, NbN, and MgB$_2$. Indeed, beyond the so-called superheating field $H_{\mathrm{sh}}$, flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We present intuitive arguments and simple estimates for $H_{\mathrm{sh}}$, and combine them with our previous rigorous calculations, which we summarize. We briefly discuss experimental measurements of the superheating field, comparing to our estimates. We explore the effects of materials anisotropy and the danger of disorder in nucleating vortex entry. Will we need to control surface orientation in the layered compound MgB$_2$? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. Flux entering a laminate can lead to so-called pancake vortices; we consider the physics of the dislocation motion and potential re-annihilation or stabilization of these vortices after their entry.
Kanchan M Samant; Santosh K Haram; Sudhir Kapoor
2007-01-01
This paper describes the effect of flow rate, carrier gas (H2, N2 and Ar) composition, and amount of benzene on the quality and yield of carbon nanotubes (CNTs) formed by a catalytic vapour decomposition (CVD) method. The flow control of the gases and the mass control of the precursor vapors were found to be interdependent and therefore crucial in deciding the quality and yield of CNTs. We achieved this control with a modified soap-bubble flowmeter, which regulated the flow rates of the two gases simultaneously. With this set-up, CNTs can be prepared in any common laboratory. Raman spectroscopy indicated the possible formation of single-walled carbon nanotubes (SWNTs). From scanning electron microscopy (SEM) measurements, the average diameter of a tube/bundle was estimated to be about 70 nm. Elemental analysis using the energy dispersion spectrum (EDS) indicated 96 at. wt.% carbon along with ca. 4 at. wt.% iron in the as-prepared sample. The maximum yield and best quality CNTs were obtained using H2 as the carrier gas.
Theoretical considerations on maximum running speeds for large and small animals.
Fuentes, Mauricio A
2016-02-01
Mechanical equations for fast running speeds are presented and analyzed. One of the equations and its associated model predict that animals tend to experience larger mechanical stresses in their limbs (muscles, tendons and bones) as a result of larger stride lengths, suggesting a structural restriction entailing the existence of an absolute maximum possible stride length. The consequence for big animals is that an increasingly larger body mass implies decreasing maximal speeds, given that the stride frequency generally decreases for increasingly larger animals. Another restriction, acting on small animals, is discussed only in preliminary terms, but it seems safe to assume from previous studies that for a given range of body masses of small animals, those which are bigger are faster. The difference between speed scaling trends for large and small animals implies the existence of a range of intermediate body masses corresponding to the fastest animals.
Cushing, Scott K; Bristow, Alan D; Wu, Nianqiang
2015-11-28
Plasmonics can enhance solar energy conversion in semiconductors by light trapping, hot electron transfer, and plasmon-induced resonance energy transfer (PIRET). The multifaceted response of the plasmon and multiple interaction pathways with the semiconductor makes optimization challenging, hindering design of efficient plasmonic architectures. Therefore, in this paper we use a density matrix model to capture the interplay between scattering, hot electrons, and dipole-dipole coupling through the plasmon's dephasing, including both the coherent and incoherent dynamics necessary for interactions on the plasmon's timescale. The model is extended to Shockley-Queisser limit calculations for both photovoltaics and solar-to-chemical conversion, revealing the optimal application of each enhancement mechanism based on plasmon energy, semiconductor energy, and plasmon dephasing. The results guide application of plasmonic solar-energy harvesting, showing which enhancement mechanism is most appropriate for a given semiconductor's weakness, and what nanostructures can achieve the maximum enhancement.
Theoretical study of the seasonal behavior of the global ionosphere at solar maximum
Sojka, J. J.; Schunk, R. W.
1989-01-01
The seasonal behavior of the global ionosphere was studied using a time-dependent, three-dimensional physical model (developed by Schunk and his coworkers) of the ionosphere at altitudes between 120 and 800 km. This model accounts for field-aligned diffusion, cross-field electrodynamic drifts both in the equatorial region and at high latitudes, interhemispheric flow, thermospheric winds, polar wind escape, energy-dependent chemical reactions, neutral composition changes, ion production due to solar EUV radiation and auroral precipitation, thermal conduction, diffusion-thermal heat flow, and local heating and cooling processes. The model studies were carried out for both June and December solstice conditions at solar maximum and for low geomagnetic activity. The ionospheric features predicted by the model agreed qualitatively with the available measurements.
In vitro metabolic engineering of hydrogen production at theoretical yield from sucrose.
Myung, Suwan; Rollin, Joseph; You, Chun; Sun, Fangfang; Chandrayan, Sanjeev; Adams, Michael W W; Zhang, Y-H Percival
2014-07-01
Hydrogen is one of the most important industrial chemicals and will arguably be the best fuel in the future. Hydrogen production from less costly renewable sugars can provide affordable hydrogen, decrease reliance on fossil fuels, and achieve nearly zero net greenhouse gas emissions, but current chemical and biological means suffer from low hydrogen yields and/or severe reaction conditions. An in vitro synthetic enzymatic pathway comprising 15 enzymes was designed to split water to hydrogen, powered by sucrose. Hydrogen and carbon dioxide were spontaneously generated from sucrose or glucose and water, mediated by enzyme cocktails containing up to 15 enzymes under mild reaction conditions (i.e., 37 °C and atmospheric pressure). In a batch reaction, the hydrogen yield was 23.2 mol of dihydrogen per mole of sucrose, i.e., 96.7% of the theoretical yield (12 dihydrogen per hexose). In a fed-batch reaction, increasing the substrate concentration led to a 3.3-fold enhancement in reaction rate, to 9.74 mmol of H2/L/h. These proof-of-concept results suggest that catabolic water splitting powered by sugars and catalyzed by enzyme cocktails could be an appealing green hydrogen production approach.
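A quick check of the figures quoted above (the theoretical maximum is 12 H2 per hexose, i.e. 24 H2 per mole of sucrose, which contains one glucose and one fructose unit):

```python
theoretical_h2_per_sucrose = 2 * 12    # two hexose units per sucrose
observed_h2_per_sucrose = 23.2
print(f"{observed_h2_per_sucrose / theoretical_h2_per_sucrose:.1%} of theoretical")  # ~96.7%
```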
Аbоut a theoretical yield of glucose from starch
V. V. Ananskikh
2016-01-01
Starch is the raw material for the production of crystalline food-grade glucose. Enzymatic conversion of high-purity starch can yield glucose syrups with a glucose equivalent (GE) of 98%, containing about 95% glucose and about 5% maltose and maltotriose. Starch hydrolysis proceeds with a gain in solids, so 100 kg of starch can give up to 109.81 kg of glucose syrup on a dry basis; accounting for losses at the manufacturing steps, the yield can decrease to 105.61 kg. The purified glucose syrup is concentrated to 73–75% dry matter and sent to crystallization. Glucose is crystallized from a supersaturated solution over 56–70 hours while the temperature is reduced from 46–48 °C to 24–26 °C, producing a mixture of glucose crystals and an intercrystalline run-off syrup called a massecuite. Crystallization is stopped when the crystal content of the massecuite reaches 50%, at which point the glucose yield is 105.61/2 = 52.8%. Crystallization follows a single-stage scheme with partial return of the end run-off product (hydrol) to the hydrolysed syrup. The massecuite is then centrifuged to separate the glucose crystals from the run-off syrup, part of which is returned to the initial syrup to lower its GE; the remainder is sold. The higher the GE of the syrup sent to crystallization, the more hydrol can be returned to the hydrolysed syrup, and the higher the resulting yield of glucose crystals. Based on these calculations, a computer program was written that determines the theoretical glucose and hydrol yields as the properties of the hydrolysed syrup are varied. The higher the GE of the hydrolysed syrup, the higher the yield of crystalline glucose and the lower the yield of hydrol. So, at 98% GE of the hydrolysed syrup it is
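The gain in solids quoted above follows from the stoichiometry of starch hydrolysis; a minimal sketch of the pure stoichiometric limit (the 109.81 kg figure in the abstract is lower because the 98% GE syrup also contains maltose and maltotriose):

```python
# Hydrolysis adds one water per anhydroglucose unit:
# (C6H10O5)n + n H2O -> n C6H12O6, so 162.14 g of dry starch can give 180.16 g of glucose.
M_ANHYDROGLUCOSE = 162.14   # g/mol
M_GLUCOSE = 180.16          # g/mol

starch_kg = 100.0
max_glucose_kg = starch_kg * M_GLUCOSE / M_ANHYDROGLUCOSE
print(f"{starch_kg:.0f} kg starch -> up to {max_glucose_kg:.1f} kg glucose")   # ~111.1 kg
```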
Kutty, M.K.; Qasim, S.Z.
Theoretical yield values of Cynoglossus macrolepidotus were computed from a simple Beverton and Holt type model using information on growth and mortality rates. The effects of various fishing mortality rates (F) and ages of exploitation (Tp...
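A sketch of a Beverton-Holt yield-per-recruit calculation of the kind referred to above; all parameter values are invented for illustration and are not those estimated for Cynoglossus macrolepidotus:

```python
import math

def yield_per_recruit(F, M, K, t0, W_inf, t_r, t_c, t_max):
    """Beverton-Holt Y/R for fishing mortality F and age at first capture t_c."""
    omega = (1.0, -3.0, 3.0, -1.0)     # expansion of the cubic von Bertalanffy weight term
    total = 0.0
    for n, w in enumerate(omega):
        Z_n = F + M + n * K
        total += w * math.exp(-n * K * (t_c - t0)) / Z_n * (1.0 - math.exp(-Z_n * (t_max - t_c)))
    return F * math.exp(-M * (t_c - t_r)) * W_inf * total

# Illustrative growth and mortality parameters only.
for F in (0.2, 0.5, 1.0, 1.5):
    ypr = yield_per_recruit(F, M=0.4, K=0.3, t0=-0.2, W_inf=500.0, t_r=0.5, t_c=1.0, t_max=10.0)
    print(f"F = {F:.1f} -> Y/R = {ypr:.1f} g")
```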
Äystö J.
2012-02-01
A new method to measure independent fission product yields, employing the ion guide technique and a Penning trap as a precision mass filter that allows unambiguous identification of the nuclides, is presented. The method was used to determine the independent yields in the proton-induced fission of 232Th and 238U at 25 MeV. The data were analyzed with a consistent model describing the fission product formation cross sections at projectile energies up to 100 MeV. Pre-compound nucleon emission is described with the two-component exciton model using a Monte Carlo method. Decay of excited compound nuclei is treated within a time-dependent statistical model including the nuclear friction effect. The charge distribution of the primary fragment isobaric chain was treated as a result of frozen quantal fluctuations of the isovector nuclear density. The theoretical predictions of the independent fission product cross sections are used to normalize the measured fission product isotopic distributions.
Michael D. Hare
2015-01-01
A field trial in Northeast Thailand during 2011–2012 compared the effects of nitrogen fertilizer, applied as urea in the wet season, on the growth and quality of Panicum maximum cvv. Mombasa and Tanzania. In the establishment year, increasing rates of nitrogen (0, 20, 40 and 60 kg N/ha every 40-45 days; 0–180 kg N/ha for the growing period) progressively increased stem, leaf and total DM production (P<0.05). At higher rates (80 and 100 kg N/ha, or 240–300 kg N/ha for the growing period), only total DM increased at the highest rate. In the second year, a rate of 20 kg N/ha every 40-45 days (80 kg N/ha for the growing season) doubled the amount of DM compared with no nitrogen, and 80 kg N/ha every 40-45 days (320 kg N/ha for the growing period) produced significantly higher stem, leaf and total DM yields than most other rates. The yield response (kg DM/kg N) decreased linearly (24.7 to 20.3 in 2011; 56.7 to 15.1 in 2012) from the lowest to the highest rate of nitrogen. In both years, increasing rates of nitrogen significantly increased CP and NDF concentrations in stems and leaves and ADF concentrations in stems. Mombasa produced 17 and 19% more leaf and 18 and 22% more total DM than Tanzania, in the first and the second year, respectively. Mombasa also produced 30% more stem DM than Tanzania in the second year. While Tanzania produced higher CP levels than Mombasa in the establishment year, in the second year Tanzania had higher levels than Mombasa only when N rates of 80–100 kg N/ha were applied every 40-45 days (320–400 kg N/ha for the growing period). Applying 60 kg N/ha every 40-45 days appears to be a reasonable compromise to achieve satisfactory DM yields in the wet season (8,000 kg/ha first year and 12,000 kg/ha second year), leaf percentage of 68–70% and leaf CP concentrations above 7%. Keywords: Guinea grass, crude protein, leaf production, fertilizer responses. DOI: 10.17138/TGFT(3)27-33
Hagemann, Ian S; O'Neill, Patrick K; Erill, Ivan; Pfeifer, John D
2015-09-01
The information-theoretic concept of Shannon entropy can be used to quantify the information provided by a diagnostic test. We hypothesized that in tumor types with stereotyped mutational profiles, the results of NGS testing would yield lower average information than in tumors with more diverse mutations. To test this hypothesis, we estimated the entropy of NGS testing in various cancer types, using results obtained from clinical sequencing. A set of 238 tumors were subjected to clinical targeted NGS across all exons of 27 genes. There were 120 actionable variants in 109 cases, occurring in the genes KRAS, EGFR, PTEN, PIK3CA, KIT, BRAF, NRAS, IDH1, and JAK2. Sequencing results for each tumor were modeled as a dichotomized genotype (actionable mutation detected or not detected) for each of the 27 genes. Based upon the entropy of these genotypes, sequencing was most informative for colorectal cancer (3.235 bits of information/case) followed by high grade glioma (2.938 bits), lung cancer (2.197 bits), pancreatic cancer (1.339 bits), and sarcoma/STTs (1.289 bits). In the most informative cancer types, the information content of NGS was similar to surgical pathology examination (modeled at approximately 2-3 bits). Entropy provides a novel measure of utility for laboratory testing in general and for NGS in particular. This metric is, however, purely analytical and does not capture the relative clinical significance of the identified variants, which may also differ across tumor types.
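A minimal sketch of the entropy measure described above: Shannon entropy, in bits per case, of the distribution of dichotomized genotype profiles observed for a tumor type (the profiles below are invented, and the real study scores profiles across 27 genes):

```python
from collections import Counter
import math

def shannon_entropy(profiles):
    """Shannon entropy in bits of a list of discrete outcomes."""
    counts = Counter(profiles)
    n = len(profiles)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

profiles = ["KRAS", "KRAS", "none", "BRAF", "none", "PIK3CA", "KRAS", "none"]
print(f"{shannon_entropy(profiles):.2f} bits per case")
```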
Raynald Labrecque
2009-11-01
It is known that mechanical work, and in turn electricity, can be produced from a difference in the chemical potential that may result from a salinity gradient. Such a gradient may be found, for instance, in an estuary where a stream of soft water is flooding into a sink of salty water which we may find in an ocean, gulf or salt lake. Various technological approaches are proposed for the production of energy from a salinity gradient between a stream of soft water and a source of salty water. Before considering the implementation of a typical technology, it is of utmost importance to be able to compare various technological approaches, on the same basis, using the appropriate variables and mathematical formulations. In this context, exergy balance can become a very useful tool for an easy and quick evaluation of the maximum thermodynamic work that can be produced from energy systems. In this short paper, we briefly introduce the use of exergy for enabling us to easily and quickly assess the theoretical maximum power or ideal reversible work we may expect from typical salinity gradient energy systems.
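A back-of-envelope sketch (not the exergy balance of the paper itself) of the ideal work recoverable when a unit volume of fresh water is mixed into a large reservoir of seawater, using the van 't Hoff osmotic pressure as the upper bound; the salinity and temperature are assumed typical values:

```python
R = 8.314        # J/(mol K)
T = 293.0        # K, assumed water temperature
C_SALT = 600.0   # mol/m^3, ~0.6 M NaCl assumed for seawater
I_VANT_HOFF = 2  # two ions per fully dissociated NaCl

osmotic_pressure = I_VANT_HOFF * C_SALT * R * T      # Pa
work_per_m3 = osmotic_pressure * 1.0                 # J per m^3 of fresh water
print(f"~{work_per_m3 / 3.6e6:.2f} kWh per m^3 of fresh water")   # on the order of 0.8 kWh/m^3
```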
Tripathi, Brijesh; Sircar, Ratna
2016-09-01
The maximum performance of a nc-Si:H/a-Si:H quantum well solar cell is theoretically evaluated by studying the spectral absorption of incident radiation with respect to the number of inserted nc-Si:H quantum well layers. Fundamental intrinsic properties of a-Si:H and nc-Si:H materials reported in the literature have been used to evaluate the performance parameters. Enhanced spectral absorption is recorded due to the insertion of nc-Si:H quantum well layers in the intrinsic region of the a-Si:H solar cell. By inserting 50 QW layers of nc-Si:H in the intrinsic region of the a-Si:H solar cell, the short-circuit current density (JSC) increases by ∼100% compared with the baseline, whereas the open-circuit voltage (VOC) decreases by ∼38%. The decrease in VOC is explained on the basis of quasi-Fermi level separation under the illuminated state of the solar cell. The theoretical maximum efficiency, reflecting the combined effect of the increase in JSC and the decrease in VOC, increases by ∼24% compared with the baseline due to the use of QWs, as calculated using an ideal carrier lifetime value. With a realistic carrier lifetime of state-of-the-art a-Si:H solar cells, the addition of QWs does not yield any significant gain. From this study, it is concluded that a high carrier lifetime is required to gain a noteworthy benefit from the nc-Si:H/a-Si:H QWs.
Mao, Jia; Heck, Barbara; Reiter, Günter; Laborie, Marie-Pierre
2015-03-06
We report on near theoretical yield production of cellulose I nanocrystals (CNCs) using a two-step hydrolysis with the mildly acidic ionic liquid (IL) 1-butyl-3-methylimidazolium hydrogen sulfate ([Bmim]HSO4) in aqueous solution from common cellulosic sources. Two successive Taguchi experimental plans were performed to evaluate the impact of selected reaction parameters (T, t, H2O:IL ratio) and their interactions on the CNCs' yield from bleached softwood kraft pulp (SWP), bleached hardwood kraft pulp (HWP) and microcrystalline cellulose (MCC). With these experimental plans, the molar yield for extraction of nanocrystals was optimized to near theoretical levels, reaching 57.7±3.0%, 57.0±2.0%, and 75.6±3.0%, for SWP, HWP and MCC, respectively. The reaction yields corresponded to a relative crystalline region recovery of 84.1±5.3%, 71.7±1.3%, 76.0±2.0% from SWP, HWP and MCC, respectively. The collected nanocrystals exhibited high aspect ratios (36-43), negligible sulfur content (0.02-0.21%) and high solvent dispersibility in comparison to those obtained with the traditional sulfuric acid method. Additionally these near theoretical yields were achieved for mild reaction conditions with the combined severity factor of 2 and 3 for MCC and pulp, respectively. Overall this two-stage IL-mediated preparation of nanocrystals combines the advantages of achieving high product quality, high reaction yields and mild conditions.
King, Zachary A.; Feist, Adam
2014-01-01
specificity of central metabolic enzymes (especially GAPD and ALCD2x) is shown to increase NADPH production and increase theoretical yields for native products in E. coli and yeast-including l-aspartate, l-lysine, l-isoleucine, l-proline, l-serine, and putrescine-and non-native products in E. coli-including 1...
J. L. Ramírez
2011-06-01
In a randomized block design with four replicates, the influence of climatic indicators on the yield and quality of the grass Panicum maximum cv. Likoni was evaluated, and mathematical expressions were established that relate these indicators to yield and quality.
Arbones, B.; Figueiras, F. G.; Varela, R.
2000-09-01
Spectral and non-spectral measurements of the maximum quantum yield of carbon fixation for natural phytoplankton assemblages were compared in order to evaluate their effect on bio-optical models of primary production. Field samples were collected from two different coastal regions of NW Spain in spring, summer and autumn, and from a polar environment (Gerlache Strait, Antarctica) during the austral summer. Concurrent determinations were made of the spectral phytoplankton absorption coefficient [aph(λ)], the white-light-limited slope of the photosynthesis-irradiance relationship (αB), carbon uptake action spectra [αB(λ)], broad-band maximum quantum yields (φm), and spectral maximum quantum yields [φm(λ)]. Carbon uptake action spectra roughly followed the shape of the corresponding phytoplankton absorption spectra, but with a slight displacement in the blue-green region that could be attributed to imbalance between the two photosystems PS I and PS II. Results also confirmed previous observations of the wavelength dependency of the maximum quantum yield. The broad-band maximum quantum yield (φm), calculated from the measured spectral phytoplankton absorption coefficient and the spectrum of the light source of the incubators, was not significantly different from the spectrally averaged maximum quantum yield (t-test for paired samples, P=0.34). These results suggest that the maximum quantum yield can be estimated with sufficient accuracy from white-light P-E curves and measured phytoplankton absorption spectra. Primary production under light-limiting regimes was compared using four different models with varying degrees of spectral complexity. No significant differences (t-test for paired samples, P=0.91) were found between a spectral model based on the carbon uptake action spectra [αB(λ); model a] and a model that uses the broad-band φm and measured aph(λ) (model b). In addition, primary production derived from constructed action spectra [ac
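A minimal sketch of how the broad-band maximum quantum yield described above can be obtained from a white-light P-E slope and a measured absorption spectrum weighted by the incubator lamp spectrum; the spectra and the slope below are fabricated placeholders, not the cruise data:

```python
import numpy as np

# phi_m = alpha_B / a_bar, where a_bar is the phytoplankton absorption coefficient
# weighted by the lamp spectrum E(lambda).  Units must be chosen consistently.
wavelengths = np.arange(400.0, 701.0, 10.0)                    # nm
a_ph = 0.02 * np.exp(-((wavelengths - 440.0) / 60.0) ** 2) \
     + 0.01 * np.exp(-((wavelengths - 675.0) / 15.0) ** 2)     # fake a_ph(lambda), m^2 (mg chl)^-1
E_lamp = np.ones_like(wavelengths)                             # fake flat lamp spectrum

a_bar = np.trapz(a_ph * E_lamp, wavelengths) / np.trapz(E_lamp, wavelengths)
alpha_B = 4e-4    # fake white-light-limited P-E slope
print(f"phi_m ~ {alpha_B / a_bar:.3f} mol C (mol quanta absorbed)^-1")
```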
The realistic energy yield potential of GaAs-on-Si tandem solar cells: a theoretical case study.
Liu, Haohui; Ren, Zekun; Liu, Zhe; Aberle, Armin G; Buonassisi, Tonio; Peters, Ian Marius
2015-04-06
Si-based tandem solar cells represent an alternative to traditional compound III-V multijunction cells as a promising way to achieve high efficiencies. A theoretical study on the energy yield of GaAs-on-Si (GaAs/Si) tandem solar cells is performed to assess their energy yield potential under realistic illumination conditions with varying spectrum. We find that the yield of a 4-terminal contact scheme with a thick top cell is more than 15% higher than that of a 2-terminal scheme. Furthermore, we quantify the main losses that occur for this type of solar cell under varying spectra. Apart from current mismatch, we find that a significant power loss can be attributed to low irradiance seen by the sub-cells. The study shows that despite a non-optimal bandgap combination, GaAs/Si tandem solar cells have the potential to surpass 30% energy conversion efficiency.
Mehrotra, Shakti; Prakash, O; Khan, Feroz; Kukreja, A K
2013-02-01
KEY MESSAGE: An ANN-based combinatorial model is proposed and its efficiency is assessed for predicting the optimal culture conditions that achieve maximum productivity, in terms of high biomass, in a bioprocess. A neural network approach is combined with the hidden Markov concept to assess the optimal values of different environmental factors that result in maximum biomass productivity of cultured tissues after a definite culture duration. Five hidden Markov models (HMMs) were derived for five test culture conditions, i.e. pH of the liquid growth medium, volume of medium per culture vessel, sucrose concentration (% w/v) in the growth medium, nitrate concentration (g/l) in the medium, and density of the initial inoculum (g fresh weight) per culture vessel, together with the corresponding fresh weight biomass. The artificial neural network (ANN) model was represented as a function of these five Markov models, and the overall simulation of fresh weight biomass was done with this combinatorial ANN-HMM. Empirical results from Rauwolfia serpentina hairy roots were taken as the model system and compared with simulated results obtained from the pure ANN and the ANN-HMM. Stochastic testing and Cronbach's α values revealed greater internal consistency and a more skewed histogram for the ANN-HMM (0.4635) than for the pure ANN (0.3804). The optimal conditions for maximum fresh weight production simulated by the ANN-HMM and ANN models closely resemble the experimentally optimized culture conditions under which the highest fresh weight was obtained; however, the combinatorial model deviated from the experimental values by only 2.99%, compared with 5.44% for the pure ANN model.
Baker, Erin J.; Kellogg, Christina A.
2014-01-01
Coral microbiology is an expanding field, yet there is no standard DNA extraction protocol. Although many researchers depend on commercial extraction kits, no specific kit has been optimized for use with coral samples. Both soil and plant DNA extraction kits from MO BIO Laboratories, Inc., have been used by many research groups for this purpose. MO BIO recently replaced their PowerPlant® kit with an improved PowerPlantPro kit, but it was unclear how these changes would affect the kit’s use with coral samples. In order to determine which kit produced the best results, we conducted a comparison between the original PowerPlant kit, the new PowerPlantPro kit, and an alternative kit, PowerSoil, using samples from several different coral genera. The PowerPlantPro kit had the highest DNA yields, but the lack of 16S rRNA gene amplification in many samples suggests that much of the yield may be coral DNA rather than microbial DNA. The most consistent positive amplifications came from the PowerSoil kit.
Mao, Fangjie; Zhou, Guomo; Li, Pingheng; Du, Huaqiang; Xu, Xiaojun; Shi, Yongjun; Mo, Lufeng; Zhou, Yufeng; Tu, Guoqing
2017-04-15
The selective cutting method currently used in Moso bamboo forests has resulted in a reduction of stand productivity and carbon sequestration capacity. Given the time and labor expense involved in addressing this problem manually, simulation using an ecosystem model is the most suitable approach. The BIOME-BGC model was improved to suit managed Moso bamboo forests, which was adapted to include age structure, specific ecological processes and management measures of Moso bamboo forest. A field selective cutting experiment was done in nine plots with three cutting intensities (high-intensity, moderate-intensity and low-intensity) during 2010-2013, and biomass of these plots was measured for model validation. Then four selective cutting scenarios were simulated by the improved BIOME-BGC model to optimize the selective cutting timings, intervals, retained ages and intensities. The improved model matched the observed aboveground carbon density and yield of different plots, with a range of relative error from 9.83% to 15.74%. The results of different selective cutting scenarios suggested that the optimal selective cutting measure should be cutting 30% culms of age 6, 80% culms of age 7, and all culms thereafter (above age 8) in winter every other year. The vegetation carbon density and harvested carbon density of this selective cutting method can increase by 74.63% and 21.5%, respectively, compared with the current selective cutting measure. The optimized selective cutting measure developed in this study can significantly promote carbon density, yield, and carbon sink capacity in Moso bamboo forests.
Jacob N. Chung
2014-01-01
Two concept systems based on the thermochemical process of high-temperature steam gasification of lignocellulosic biomass and municipal solid waste are introduced. The primary objectives of the concept systems are (1) to develop the best scientific, engineering, and technology solutions for converting lignocellulosic biomass, as well as agricultural, forest and municipal waste, to clean energy (pure hydrogen fuel), and (2) to minimize water consumption and the detrimental impacts of energy production on the environment (air pollution and global warming). In the first concept system, superheated steam is produced by hydrogen combustion using recycled hydrogen, while in the second concept system concentrated solar energy is used for steam production. A membrane reactor that performs hydrogen separation and the water gas shift reaction is included in both systems to produce more pure hydrogen and to sequester CO2. Based on obtaining the maximum hydrogen production rate, the hydrogen recycle ratio is around 20% for the hydrogen-combustion steam heating system. Combined with pure hydrogen production, both high-temperature steam gasification systems can potentially exceed 80% first-law overall system thermodynamic efficiency.
Thompson, William L.; Lee, Danny C.
2000-11-01
Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits-per-spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every 33% increase in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2 °C increase in mean annual air temperature.
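A short sketch of the Ricker-type stock-recruitment form referred to above and of where its maximum recruitment sits; the parameter values are invented for illustration:

```python
import math

# Ricker stock-recruitment: R = a * S * exp(-b * S).
# Recruitment peaks at S = 1/b, where R_max = a / (b * e).
a, b = 4.0, 1e-3    # a: recruits per spawner at low abundance; b: density dependence

def ricker(spawners):
    return a * spawners * math.exp(-b * spawners)

s_peak = 1.0 / b
r_max = a / (b * math.e)
print(f"peak recruitment {r_max:.0f} at {s_peak:.0f} spawners (check: {ricker(s_peak):.0f})")
```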
Woittiez, Lotte S.; Wijk, van Mark T.; Slingerland, Maja; Noordwijk, van Meine; Giller, Ken E.
2017-01-01
Oil palm, currently the world's main vegetable oil crop, is characterised by a large productivity and a long life span (≥25 years). Peak oil yields of 12 t ha−1 yr−1 have been achieved in small plantations, and maximum theoretical yields as calculated with simulation models are 18.5 t oil ha−1 yr−1,
Érica Matsumoto de Souza
2005-08-01
An experiment was conducted from 6 September 2000 to 18 September 2001 to evaluate the effects of irrigation and nitrogen fertilization on the forage mass of five different cultivars of Panicum maximum Jacq. Five cultivars (Guiné, Colonião, Mombaça, Tanzânia and Centauro), three nitrogen rates (50, 75 and 100 kg N/ha per cut) and the presence or absence of irrigation were evaluated in a randomized block design, in a 5 x 3 x 2 factorial scheme, with four replicates. Irrigation produced significant increases in forage mass production (FMP) for all cultivars. In the rainy season (October 2000 to March 2001), the rates of 75 and 100 kg N/ha per cut gave higher FMP than 50 kg N/ha per cut. Under irrigation, the cultivar Mombaça showed higher FMP than the other cultivars, whereas without irrigation all cultivars produced similarly. In the dry season (May 2001 to August 2001), irrigation significantly increased FMP relative to the non-irrigated treatment. FMP increased as N rates were raised, whereas without irrigation there was a difference only between the 50 and 100 kg N/ha per cut rates. A positive effect of irrigation on FMP was observed from the end of the dry season onwards, when temperatures began to rise and photoperiod was probably no longer a limiting factor; that is, the forage growing season could be brought forward to August, with mean FMP of 1 to 2 t/ha at 75 kg N/ha per cut and up to 3 t/ha at 100 kg N/ha per cut, relative to the non-irrigated treatment.
Babin, Marcel; Morel, André; Claustre, Hervé; Bricaud, Annick; Kolber, Zbigniew; Falkowski, Paul G.
1996-08-01
Natural variability of the maximum quantum yield of carbon fixation (φC max), as determined from the initial slope of the photosynthesis-irradiance curve and from light absorption measurements, was studied at three sites in the northeast tropical Atlantic representing typical eutrophic, mesotrophic and oligotrophic regimes. At the eutrophic and mesotrophic sites, where the mixed layer extended deeper than the euphotic layer, all photosynthetic parameters were nearly constant with depth, and φC max averaged 0.05 and 0.03 mol C (mol quanta absorbed)⁻¹, respectively. At the oligotrophic site, a deep chlorophyll maximum (DCM) existed and φC max varied from ca. 0.005 in the upper nutrient-depleted mixed layer to 0.063 below the DCM in stratified waters. Firstly, φC max was found roughly to covary with nitrate concentration between sites and with depth at the oligotrophic site, and secondly, it was found to decrease with increasing relative concentrations of non-photosynthetic pigments. The extent of φC max variations directly related to nitrate concentration was inferred from variations in the fraction of functional PS2 reaction centers (f), measured using fast repetition rate fluorometry. Covariations between f and nitrate concentration indicate that the latter factor may be responsible for a 2-fold variation in φC max. Moreover, partitioning light absorption between photosynthetic and non-photosynthetic pigments suggests that the variable contribution of the non-photosynthetic absorption may explain a 3-fold variation in φC max, as indicated by variations in the effective absorption cross-section of photosystem 2 (σPS2). Results confirm the role of nitrate in φC max variation, and emphasize those of light and vertical mixing.
Sher Khan Panhwar; LIU Qun; Fozia Khan; Pirzada J. A. Siddiqui
2012-01-01
Using the surplus production model packages ASPIC (a stock-production model incorporating covariates) and CEDA (catch-effort data analysis), we analyzed the catch and effort data of the Sillago sihama fishery in Pakistan. ASPIC estimates the parameters MSY (maximum sustainable yield), Fmsy (fishing mortality at MSY), q (catchability coefficient), K (carrying capacity or unexploited biomass) and B1/K (the ratio of initial biomass to carrying capacity). The estimated non-bootstrapped value of MSY based on the logistic model was 598 t and that based on the Fox model was 415 t, which shows that the Fox model estimate is more conservative than that of the logistic model. The R2 with the logistic model (0.702) is larger than that with the Fox model (0.541), indicating a better fit. The coefficient of variation (cv) of the estimated MSY was about 0.3, except for a larger value of 88.87 and a smaller value of 0.173. In contrast to the ASPIC results, in CEDA the R2 with the Fox model (0.651-0.692) was larger than that with the Schaefer model (0.435-0.567), indicating a better fit. The key parameters of CEDA are MSY, K, q, and r (intrinsic growth rate), and the three error assumptions used with the models are normal, log-normal and gamma. Parameter estimates from the Schaefer and Pella-Tomlinson models were similar. The MSY estimates from these two models were 398 t, 549 t and 398 t for normal, log-normal and gamma error distributions, respectively. The MSY estimates from the Fox model were 381 t, 366 t and 366 t for the above three error assumptions, respectively. The Fox model estimates were smaller than those from the Schaefer and the Pella-Tomlinson models. In the light of the MSY estimates of 415 t from ASPIC and 381 t from CEDA, both for the Fox model, the MSY for S. sihama is about 400 t. As the catch in 2003 was 401 t, we suggest the fishery should be kept at the current level. The production models used here depend on the assumption that the CPUE (catch per unit effort) data used in the study can reliably quantify
Mishra, Manish Kumar; Mukherjee, Arijit; Ramamurty, Upadrasta; Desiraju, Gautam R
2015-11-01
A new monoclinic polymorph, form II (P21/c, Z = 4), has been isolated for 3,4-dimethoxycinnamic acid (DMCA). Its solid-state 2 + 2 photoreaction to the corresponding α-truxillic acid is different from that of the first polymorph, the triclinic form I (Z = 4) that was reported in 1984. The crystal structures of the two forms are rather different, and the two polymorphs also exhibit different photomechanical properties. Form I exhibits photosalient behavior, but this effect is absent in form II. These properties can be explained on the basis of the crystal packing in the two forms. The nanoindentation technique is used to gain further insight into these structure-property relationships. A faster photoreaction in form I and a higher yield in form II are rationalized on the basis of the mechanical properties of the individual crystal forms. It is suggested that both Schmidt-type and Kaupp-type topochemistry are applicable to the solid-state trans-cinnamic acid photodimerization reaction. Form I of DMCA is more plastic and seems to react under Kaupp-type conditions with maximum molecular movement. Form II is more brittle, and its interlocked structure seems to favor Schmidt-type topochemistry with minimum molecular movement.
Leclercq, C; Arcella, D; Turrini, A
2000-12-01
The three recent EU directives that fixed maximum permitted levels (MPL) for food additives for all member states also include a general obligation to establish national systems for monitoring the intake of these substances in order to evaluate the safety of their use. In this work, we considered additives with a primary antioxidant technological function for which an acceptable daily intake (ADI) was established by the Scientific Committee for Food (SCF): gallates, butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and erythorbic acid. The potential intake of these additives in Italy was estimated by means of a hierarchical approach using, step by step, more refined methods. The likelihood of exceeding the current ADI was very low for erythorbic acid, BHA and gallates. On the other hand, the theoretical maximum daily intake (TMDI) of BHT was above the current ADI. The three food categories found to be the main potential sources of BHT were "pastry, cakes and biscuits", "chewing gums" and "vegetable oils and margarine"; together they contributed 74% of the TMDI. Actual use of BHT in these food categories is discussed, together with other aspects such as losses of this substance during processing and the percentage actually ingested in the case of chewing gums.
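A hedged sketch of the TMDI-style screen described above: the theoretical maximum daily intake is the sum, over food categories, of the maximum permitted level times the daily consumption of that category, compared with the ADI scaled by body weight. All numbers below (ADI, MPLs, consumption amounts, body weight) are hypothetical placeholders, not the Italian survey values.

```python
# Minimal TMDI screen: TMDI = sum over food categories of (MPL x daily consumption),
# compared with ADI x body weight. All values are hypothetical placeholders.
ADI_MG_PER_KG_BW = 0.05       # hypothetical ADI for an antioxidant additive (mg/kg bw/day)
BODY_WEIGHT_KG = 60.0

# category: (maximum permitted level in mg/kg food, daily consumption in kg food/day)
food_categories = {
    "pastry, cakes and biscuits": (200.0, 0.080),
    "chewing gum":                (400.0, 0.005),
    "vegetable oils, margarine":  (100.0, 0.030),
}

tmdi_mg = sum(mpl * cons for mpl, cons in food_categories.values())
adi_mg = ADI_MG_PER_KG_BW * BODY_WEIGHT_KG

print(f"TMDI = {tmdi_mg:.1f} mg/day ({100 * tmdi_mg / adi_mg:.0f}% of the ADI)")
for name, (mpl, cons) in food_categories.items():
    print(f"  {name}: {100 * mpl * cons / tmdi_mg:.0f}% of TMDI")
```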
Elhkim, Mostafa Ould; Héraud, Fanny; Bemrah, Nawel; Gauchard, Françoise; Lorino, Tristan; Lambré, Claude; Frémy, Jean Marc; Poul, Jean-Michel
2007-04-01
Tartrazine is an artificial azo dye commonly used in human food and pharmaceutical products. Since the last assessment carried out by the JECFA in 1964, many new studies have been conducted, some of which have implicated tartrazine in food intolerance reactions. The aims of this work are to update the hazard characterization and to re-evaluate the safety of tartrazine. Our bibliographical review of animal studies confirms the initial hazard assessment conducted by the JECFA and, accordingly, the ADI established at 7.5 mg/kg bw. From our data, in France, the estimated maximum theoretical intake of tartrazine in children is 37.2% of the ADI at the 97.5th percentile. It may therefore be concluded that, from a toxicological point of view, tartrazine does not represent a risk for the consumer. It appears more difficult to show a clear relationship between ingestion of tartrazine and the development of intolerance reactions in patients. These reactions primarily occur in patients who also suffer from recurrent urticaria or asthma. The link between tartrazine consumption and these reactions is often overestimated, and the pathogenic mechanisms remain poorly understood. The prevalence of tartrazine intolerance is estimated to be less than 0.12% in the general population. Generally, the population at risk is aware of the importance of food labelling, with a view to avoiding consumption of tartrazine. However, it has to be mentioned that products such as ice creams, desserts, cakes and fine bakery products are often sold loose without any labelling.
MA Jun; TAO Shi-shun
2002-01-01
In this paper, a new cultivation practice, super-sparse cultivation associated with maximum-tiller seedlings (SSCMTS) of hybrid rice, is proposed and its high-yielding mechanism studied. The results showed that SSCMTS in hybrid rice could not only increase grain yield but also save seed and labor. It offers a new high-yielding approach for late-transplanted seedlings and heavy-panicle-type hybrid rice cultivars to further exploit the high-yield potential of hybrid rice. An increased number of spikelets and good grain-filling were the direct factors behind the high yield of SSCMTS in hybrid rice, and these high-yielding factors relied on high-quality seedlings, sturdy individual plants, a high-quality population and vigorous later growth.
赵志龙; 刘林; 陈铮
2004-01-01
The variation of yield strength along the rolling direction, the transverse direction and the direction at 45° to the transverse direction of 2090 Al-Li alloy sheet and of 2090+Ce alloy sheet containing the rare earth cerium was comparatively investigated. The difference in deformation texture between the two alloy sheets was analyzed by means of X-ray orientation distribution functions (ODF). The results show that cerium enhances the Brass and S rolling texture components and reduces the recrystallized Cube and Goss texture components; this explains why the degree of yield strength anisotropy in the 2090+Ce sheet is higher than that of the 2090 alloy sheet. Yield strength along various orientations in the two alloy sheets was predicted on the basis of the Taylor/Bishop-Hill model, and the strengthening effect of grain boundaries was evaluated using the Hall-Petch relationship. A modified plastic inclusion model was proposed using the concepts of a grain-orientation factor and a T1 phase orientation factor, fitted to the tensile test results.
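A hedged sketch of how such a strength prediction might be composed: an orientation (Taylor-type) factor converts a critical resolved shear stress into a polycrystal yield stress along a given sheet direction, and a Hall-Petch term adds the grain-boundary contribution. All constants below are illustrative placeholders, not the fitted values for the 2090 or 2090+Ce sheets.

```python
# Illustrative composition of a Taylor-type orientation factor with a
# Hall-Petch grain-boundary term: sigma_y = M * tau_crss + k_HP / sqrt(d).
# Constants are hypothetical placeholders, not fitted 2090/2090+Ce values.
import math

def yield_strength(M: float, tau_crss: float, k_hp: float, d_grain: float) -> float:
    """M: orientation (Taylor) factor for a given in-plane direction,
    tau_crss: critical resolved shear stress (MPa),
    k_hp: Hall-Petch slope (MPa*m^0.5), d_grain: grain size (m)."""
    return M * tau_crss + k_hp / math.sqrt(d_grain)

# Hypothetical orientation factors for three in-plane sheet directions.
for direction, M in [("rolling", 3.0), ("45 deg", 2.6), ("transverse", 3.2)]:
    sigma = yield_strength(M=M, tau_crss=120.0, k_hp=0.10, d_grain=25e-6)
    print(f"{direction:>10s}: sigma_y ~ {sigma:.0f} MPa")
```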
A. B.M. Hossain
2008-01-01
The study was conducted to investigate the effect of ethanol (ET) at different concentrations on bougainvillea flower longevity and the delay of senescence under storage conditions. The treatments were a water control, 2% ET, 4% ET, 8% ET, 10% ET, 20% ET, 30% ET, 40% ET, 50% ET and 70% ET. Flower longevity was 2 days longer in 4, 8 and 10% ethanol than in the water control and the other ethanol concentrations. Petal wilting and senescence occurred 2 days later in 4, 8 and 10% ET than in the water control. Petal colour change also occurred later in 4, 8 and 10% ET than in the control and in 2, 20, 30, 40, 50 and 70% ET. Chlorophyll fluorescence intensity (photosynthetic yield) followed over time (ms) at the different ethanol concentrations was higher in 4, 8 and 10% ET than in the water control and the other concentrations. Fo (minimal fluorescence) was lower in 4, 8 and 10% ET than in water and the other concentrations, whereas Fm and Fv [maximal fluorescence and relative variable fluorescence (Fm-Fo)] were higher in 4, 8 and 10% ET than in the other ET concentrations. Fv/Fm (quantum yield, or photosynthetic yield) was higher in 4, 8 and 10% ET than in the other ET concentrations. The results showed that flower vase life was significantly affected by ethanol concentration and that longevity was greatest in 4, 8 and 10% ethanol.
Butkovskaya, Nadezhda; Rayez, Marie-Thérèse; Rayez, Jean-Claude; Kukui, Alexandre; Le Bras, Georges
2009-10-22
The influence of water vapor on the production of nitric acid in the gas-phase HO2 + NO reaction was determined at 298 K and 200 Torr using a high-pressure turbulent flow reactor coupled with a chemical ionization mass spectrometer. The yield of HNO3 was found to increase linearly with increasing water concentration, reaching an enhancement factor of about 8 at [H2O] = 4 x 10^17 molecules cm^-3 (approximately 50% relative humidity). A rate constant value k_1bw = 6 x 10^-13 cm^3 molecule^-1 s^-1 was derived for the reaction involving the HO2·H2O complex, HO2·H2O + NO → HNO3 (1bw), assuming that the water enhancement is due to this reaction. k_1bw is approximately 40 times higher than the rate constant of the reaction HO2 + NO → HNO3 (1b) at the same temperature and pressure. The experimental findings are corroborated by density functional theory (DFT) calculations performed on the H2O/HO2/NO system. The significance of this result for atmospheric chemistry and chemical amplifier instruments is briefly discussed. An appendix containing a detailed consideration of the possible contribution from surface reactions in our previous studies of the title reaction and in the present one is included.
Matsumura, Masashi; Ichikawa, Kazuna; Takei, Hitoshi
2017-01-01
This study attempted to develop a formula for predicting maximum muscle strength value for young, middle-aged, and elderly adults using theoretical Grade 3 muscle strength value (moment fair: Mf)—the static muscular moment to support a limb segment against gravity—from the manual muscle test by Daniels et al. A total of 130 healthy Japanese individuals divided by age group performed isometric muscle contractions at maximum effort for various movements of hip joint flexion and extension and knee joint flexion and extension, and the accompanying resisting force was measured and maximum muscle strength value (moment max, Mm) was calculated. Body weight and limb segment length (thigh and lower leg length) were measured, and Mf was calculated using anthropometric measures and theoretical calculation. There was a linear correlation between Mf and Mm in each of the four movement types in all groups, excepting knee flexion in elderly. However, the formula for predicting maximum muscle strength was not sufficiently compatible in middle-aged and elderly adults, suggesting that the formula obtained in this study is applicable in young adults only. PMID:28133549
Usa, Hideyuki; Matsumura, Masashi; Ichikawa, Kazuna; Takei, Hitoshi
2017-01-01
This study attempted to develop a formula for predicting maximum muscle strength value for young, middle-aged, and elderly adults using theoretical Grade 3 muscle strength value (moment fair: Mf) - the static muscular moment to support a limb segment against gravity - from the manual muscle test by Daniels et al. A total of 130 healthy Japanese individuals divided by age group performed isometric muscle contractions at maximum effort for various movements of hip joint flexion and extension and knee joint flexion and extension, and the accompanying resisting force was measured and maximum muscle strength value (moment max, Mm) was calculated. Body weight and limb segment length (thigh and lower leg length) were measured, and Mf was calculated using anthropometric measures and theoretical calculation. There was a linear correlation between Mf and Mm in each of the four movement types in all groups, excepting knee flexion in elderly. However, the formula for predicting maximum muscle strength was not sufficiently compatible in middle-aged and elderly adults, suggesting that the formula obtained in this study is applicable in young adults only.
Hideyuki Usa
2017-01-01
This study attempted to develop a formula for predicting maximum muscle strength value for young, middle-aged, and elderly adults using theoretical Grade 3 muscle strength value (moment fair: Mf) - the static muscular moment to support a limb segment against gravity - from the manual muscle test by Daniels et al. A total of 130 healthy Japanese individuals divided by age group performed isometric muscle contractions at maximum effort for various movements of hip joint flexion and extension and knee joint flexion and extension, and the accompanying resisting force was measured and maximum muscle strength value (moment max, Mm) was calculated. Body weight and limb segment length (thigh and lower leg length) were measured, and Mf was calculated using anthropometric measures and theoretical calculation. There was a linear correlation between Mf and Mm in each of the four movement types in all groups, excepting knee flexion in the elderly. However, the formula for predicting maximum muscle strength was not sufficiently compatible in middle-aged and elderly adults, suggesting that the formula obtained in this study is applicable in young adults only.
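A hedged sketch of the kind of calculation described above: the Grade 3 moment is the segment weight acting at the segment's centre of mass when the limb is held horizontal against gravity, and a simple linear fit relates it to the measured maximum moment. The segment-mass and centre-of-mass fractions, and the subject data, are generic placeholders, not the anthropometric coefficients or measurements of the study.

```python
# Hedged sketch: theoretical Grade 3 moment Mf = m_segment * g * r_com
# (segment weight acting at its centre of mass, limb held horizontal), followed
# by a linear fit Mm ~ a*Mf + b against measured maximum moments.
# All fractions and subject values are hypothetical placeholders.
import numpy as np

G = 9.81

def grade3_moment(body_mass_kg, segment_length_m,
                  mass_fraction=0.10, com_fraction=0.43):
    """Static moment needed to hold the segment horizontal against gravity (N*m)."""
    m_segment = mass_fraction * body_mass_kg
    r_com = com_fraction * segment_length_m
    return m_segment * G * r_com

# Hypothetical subjects: (body mass kg, segment length m, measured max moment N*m)
subjects = np.array([
    (55.0, 0.40, 95.0),
    (70.0, 0.43, 140.0),
    (82.0, 0.45, 170.0),
    (60.0, 0.41, 110.0),
])

Mf = np.array([grade3_moment(bm, L) for bm, L, _ in subjects])
Mm = subjects[:, 2]
a, b = np.polyfit(Mf, Mm, deg=1)   # prediction formula Mm = a*Mf + b
print(f"Mm ~ {a:.2f} * Mf + {b:.1f} (N*m)")
```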
Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai
2014-07-07
In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates.
Annika K. Jägerbrand
2016-04-01
There is limited knowledge available on the thermal acclimation processes of bryophytes, especially when considering variation between populations or sites. This study investigated whether short-term ex situ thermal acclimation of different populations showed patterns of site dependency and whether the maximum quantum yield of PSII (Fv/Fm) could be used as an indicator of adaptation or temperature stress in two bryophyte species: Pleurozium schreberi (Willd. ex Brid.) Mitt. and Racomitrium lanuginosum (Hedw.) Brid. We sought to test the hypothesis that differences in the ability to acclimate to short-term temperature treatment would be revealed as differences in photosystem II maximum yield (Fv/Fm). Thermal treatments were applied to samples from 12 and 11 populations during 12 or 13 days in growth chambers and comprised: (1) 10/5 °C; (2) 20/10 °C; (3) 25/15 °C; (4) 30/20 °C (12 hours day/night temperature). In Pleurozium schreberi, there were no significant site-dependent differences before or after the experiment, while site dependencies were clearly shown in Racomitrium lanuginosum throughout the study. Fv/Fm in Pleurozium schreberi decreased at the highest and lowest temperature treatments, which can be interpreted as a stress response, but no similar trends were shown by Racomitrium lanuginosum.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Li, Meng; Wang, Jun; Du, Fu; Diallo, Boubacar; Xie, Guang Hui
2017-01-01
Due to its chemical composition and abundance, lignocellulosic biomass is an attractive feedstock source for global bioenergy production. However, chemical composition variations interfere with the success of any single methodology for efficient bioenergy extraction from diverse lignocellulosic biomass sources. Although chemical component distributions could guide process design, they are difficult to obtain and vary widely among lignocellulosic biomass types. Therefore, expensive and laborious "one-size-fits-all" processes are still widely used. Here, a non-destructive and rapid analytical technology, near-infrared spectroscopy (NIRS) coupled with multivariate calibration, shows promise for addressing these challenges. Recent advances in molecular spectroscopy analysis have led to methodologies for dual-optimized NIRS using sample subset partitioning and variable selection, which could significantly enhance the robustness and accuracy of partial least squares (PLS) calibration models. Using this methodology, chemical components and theoretical ethanol yield (TEY) values were determined for 70 sweet and 77 biomass sorghum samples from six sweet and six biomass sorghum varieties grown in 2013 and 2014 at two study sites in northern China. Chemical components and TEY of the 147 bioenergy sorghum samples were initially analyzed and compared using wet chemistry methods. Based on linear discriminant analysis, a correct classification assignment rate (either sweet or biomass type) of 99.3% was obtained using 20 principal components. Next, detailed statistical analysis demonstrated that partial optimization using sample set partitioning based on joint X-Y distances (SPXY) for sample subset partitioning enhanced the robustness and accuracy of PLS calibration models. Finally, comparisons between five dual-optimized strategies indicated that competitive adaptive reweighted sampling coupled with the SPXY (CARS-SPXY) was the most efficient and effective method for improving
Titarenko, Y E; Karpikhin, E I
2003-01-01
The objective of the project is measurements and computer simulations of independent and cumulative yields of residual product nuclei in thin targets relevant as target materials and structure materials for hybrid accelerator-driven systems coupled to high-energy proton accelerators. The yields of residual product nuclei are of great importance when estimating such basic radiation-technology characteristics of hybrid facility targets as the total target activity, target 'poisoning', buildup of long-lived nuclides that, in turn, are to be transmuted, product nuclide (Po) alpha-activity, content of low-pressure evaporated nuclides (Hg), content of chemically-active nuclides that spoil drastically the corrosion resistance of the facility structure materials, etc. In view of the above, radioactive product nuclide yields from targets and structure materials were determined by an experiment using the ITEP U-10 proton accelerator in 51 irradiation runs for different thin targets: ^182..., ^18...
Walch, Stephen P.; Duchovic, Ronald J.
1991-01-01
Computed energies and geometries are reported which, combined with previously published calculations, permit a global representation of the potential energy surface for the reaction H + O2 → HO2* → OH + O. These new calculations characterize the potential energy surface (PES) for all H atom angles of approach to O2 and for the region of the inner repulsive wall. The region of the T-shaped H-O2 exchange saddle point is connected with the constrained energy minimum (CEM) path, and a new collinear H-O2 exchange saddle point is characterized which lies only 9 kcal/mol above the H + O2 asymptote. A vibrational analysis which utilizes local cubic and quartic polynomial representations of the PES along the CEM path has been carried out. Optimal geometries, energies, and harmonic frequencies are reported along with anharmonic analyses for the O2 and OH asymptotes and for the HO2 minimum region of the PES.
Selvakumaran, T S; Sen, Soubhadra; Baskaran, R
2014-11-01
Adopting the Langevin methodology, a pressure-dependent frictional force term representing the collisional effect is added to the Lorentz equation. The electrons are assumed to start from uniformly distributed coordinates on the central plane. The trajectory of each electron is numerically simulated by solving the modified Lorentz equation for a given pressure. The Bremsstrahlung x-ray energy spectrum for each electron crossing the cavity wall boundary is obtained using the Duane-Hunt law. The total x-ray yield is estimated by adding the spectral contributions of all electrons. The calculated yields are compared with the experimental results and good agreement is found.
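A minimal hedged sketch of one electron trajectory under a friction-modified Lorentz equation, integrated with an explicit Euler step. The field amplitudes, friction scaling and geometry are placeholders; the study's actual cavity geometry, friction model and Duane-Hunt spectral step are not reproduced here.

```python
# Sketch of the modified Lorentz equation with a Langevin-style, pressure-
# dependent friction term:  m dv/dt = q (E + v x B) - nu(p) * m * v,
# integrated with an explicit Euler step for a single electron.
# Field values, friction coefficient and integration time are placeholders.
import numpy as np

Q = -1.602e-19   # electron charge (C)
M = 9.109e-31    # electron mass (kg)

def nu(pressure_pa: float) -> float:
    """Hypothetical pressure-proportional collision (friction) frequency (1/s)."""
    return 1.0e7 * pressure_pa

def trajectory(E, B, pressure_pa, dt=1e-12, steps=2000):
    r = np.zeros(3)
    v = np.zeros(3)
    damp = nu(pressure_pa)
    for _ in range(steps):
        force = Q * (E + np.cross(v, B)) - damp * M * v
        v = v + dt * force / M
        r = r + dt * v
    return r, v

if __name__ == "__main__":
    E = np.array([0.0, 0.0, 1.0e4])   # V/m, placeholder field
    B = np.array([0.0, 0.0, 0.01])    # T, placeholder field parallel to E
    r, v = trajectory(E, B, pressure_pa=1e-3)
    kinetic_eV = 0.5 * M * np.dot(v, v) / 1.602e-19
    # Duane-Hunt law: the Bremsstrahlung photon energy cannot exceed the
    # electron kinetic energy at the wall (lambda_min = h*c / (e*V)).
    print(f"final kinetic energy ~ {kinetic_eV:.1f} eV (Duane-Hunt cutoff of the spectrum)")
```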
Ramírez, J. L.
2010-07-01
In a randomized block design with 4 replicates, the influence of regrowth age (30 to 105 days) and climatic factors on the dry matter yield and nutritive quality of the grass Panicum maximum cv. Likoni was evaluated. The experiment was carried out on a fluvisol soil under rainfed conditions and without fertilization. Dry matter yield increased significantly with age (P<0.001), and quadratic equations were fitted between yield and age for both periods, with the highest values beyond 90 days (7.23 t/ha/cut in the rainy season and 2.16 t/ha/cut in the dry season). The climatic variables showed high correlations (positive and negative) with yield and chemical composition, more pronounced in the dry season. Crude protein and the digestibility of dry matter and organic matter decreased with age (P<0.001); quadratic regression equations were fitted between these variables and age, and the highest percentages occurred at 30 days of regrowth in both periods. NDF, ADF, lignin and cellulose increased with age (P<0.001), showing their highest values at 105 days of regrowth in both periods, and quadratic regression equations with age were fitted. It is concluded that age and climatic conditions had a marked effect on the indicators evaluated, most notably the decline of nutritive quality in the rainy season.
Theoretical Study of Secondary Electron Yield in the Energy Range of 10-30 keV
谢爱根; 王祖松; 刘战辉; 詹煜; 吴红艳
2013-01-01
Here we address the theory of secondary electron emission in the energy range of 10-30 keV. First, formulae for the maximum secondary electron yield (δm) and for the average number of secondary electrons released per primary electron of fairly high incident energy (δPE) are derived. Next, based on the influence of δm and δPE on the secondary electron yield at high energy (δ), a general expression for δ is obtained in terms of δm, the atomic number, atomic weight, material density, back-scattering coefficient (γ), back-scattering coefficient at high energy (η), a parameter A, the energy exponent (n), and the incident energy of the primary electron. The parameter A and the energy exponent n in the energy range of 10-30 keV were modeled and calculated for several emitters of interest with the software package ESTAR. The experimentally measured values of δ and those calculated with the general formula were compared; the comparison shows that, for secondary electron emission in the energy range of 10-30 keV, the newly developed general formula for δ works fairly well for metals, semi-metals and elemental semiconductors.
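For orientation only, the sketch below evaluates the well-known empirical Vaughan-type secondary-electron-yield curve. It is not the general formula derived in the paper above; it merely illustrates how a yield δ(E) is commonly written in terms of the maximum yield δm and the primary energy at which it occurs, and it is only meaningful near that maximum rather than at the 10-30 keV energies the paper targets.

```python
# NOT the paper's formula: the empirical Vaughan-type SEY curve, shown only to
# illustrate expressing delta(E) via delta_m and the energy of maximum yield.
# The simple form below is typically used for E not far above E_m; the
# high-energy tail (tens of keV) requires a different treatment.
import math

def vaughan_sey(E, delta_m=2.0, E_m=300.0, E_0=12.5):
    """Empirical SEY vs primary energy E (eV); yield delta_m occurs at E = E_m."""
    if E <= E_0:
        return 0.0
    v = (E - E_0) / (E_m - E_0)
    k = 0.56 if v < 1.0 else 0.25   # commonly quoted exponents
    return delta_m * (v * math.exp(1.0 - v)) ** k

for E in (100, 300, 1000, 3000):
    print(f"E = {E:>5d} eV  ->  delta ~ {vaughan_sey(E):.2f}")
```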
Luiz Augusto Fonseca Magalhães
2007-09-01
The study evaluated the effects of liming and phosphorus fertilization on the dry matter production of Tanzania grass (Panicum maximum Jacq. cv. Tanzania). The treatments were divided into groups 1 (G1) and 2 (G2). G1 evaluated three liming levels - no liming and liming to reach 30% and 60% base saturation (0, 1.12, and 2.64 t/ha, respectively) - and three phosphorus sources: single superphosphate, Yoorin thermophosphate and Arad hyperphosphate. G2 evaluated the same liming levels and five phosphorus levels: 0, 30, 60, 120, and 240 kg/ha of P. In G1 no interactions were observed between phosphorus sources and liming, nor were there liming effects. Significant differences were found among phosphorus sources: grass yield was higher under single superphosphate than under Yoorin thermophosphate or Arad hyperphosphate. Dry matter production in G2 did not differ between liming levels and no interactions were observed between phosphorus and liming levels. However, there were significant differences among phosphorus levels, with maximum dry matter production at 172.8 kg/ha of P. This result confirms the importance of phosphorus fertilization for achieving high yields in Brazilian soils.
Keywords: phosphorus; liming; forages; Tanzania grass.
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
Soheyli, S.; Khanlari, M. Varasteh
2016-09-01
The relative yields of the complete fusion and quasifission components for the 12C + 204Pb, 19F + 197Au, 30Si + 186W, and 48Ca + 168Er reactions, which all lead to the compound nucleus 216Ra, are analyzed to assess entrance channel effects by comparison of capture, complete fusion, and quasifission cross sections, the emission barriers (B*_fus, B_qf), as well as the complete fusion probability estimated by a statistical method within the framework of the dinuclear system model. The difference among complete fusion probabilities calculated by the dinuclear system model for different entrance channels can be explained by the hindrance to complete fusion due to the larger inner fusion barrier B*_fus for the transformation of the dinuclear system into a compound nucleus, and by the increase of the quasifission contribution due to the decrease of the quasifission emission barrier B_qf as a function of the angular momentum. Although these reactions with different entrance channels populate the same compound nucleus 216Ra at similar excitation energies, the model predicts negligible quasifission probability for reactions having higher entrance channel mass asymmetry, for which the dominant decay channel is complete fission. For reactions induced by massive projectiles such as Si and Ca, which have lower entrance channel mass asymmetry, the quasifission component is dominant in the evolution of the dinuclear system, and the fusion process is extremely hindered.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g. a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
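A hedged sketch in the spirit of (but not identical to) the approach described above: the classic discrete MaxEnt solution under a single mean constraint has the familiar exponential form found by adjusting a Lagrange multiplier, and Gaussian uncertainty on the constraint value can be propagated by simply sampling the constraint and recomputing the MaxEnt point solution. The die example and the Gaussian parameters are placeholders.

```python
# Classic MaxEnt over a finite set {x_i} with one mean constraint
# sum_i p_i * x_i = mu gives p_i proportional to exp(-lam * x_i); lam is
# found numerically. Gaussian uncertainty on mu is then propagated by sampling,
# giving a crude distribution over MaxEnt probability vectors (illustration only).
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)  # e.g. faces of a die

def maxent_probs(mu: float) -> np.ndarray:
    def mean_gap(lam):
        w = np.exp(-lam * x)
        return np.dot(w, x) / w.sum() - mu
    lam = brentq(mean_gap, -50.0, 50.0)
    w = np.exp(-lam * x)
    return w / w.sum()

# Classic (point) MaxEnt for an exactly known constraint value:
print(np.round(maxent_probs(4.5), 3))

# Constraint known only as mu ~ N(4.5, 0.2^2): sample the induced distribution.
rng = np.random.default_rng(0)
samples = np.array([maxent_probs(m)
                    for m in rng.normal(4.5, 0.2, size=200)
                    if 1.05 < m < 5.95])
print("mean p_i:", np.round(samples.mean(axis=0), 3))
print("std  p_i:", np.round(samples.std(axis=0), 3))
```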
Grousseau, Estelle; Blanchet, Elise; Déléris, Stéphane; Albuquerque, Maria G E; Paul, Etienne; Uribelarrea, Jean-Louis
2013-11-01
In this study a complementary modeling and experimental approach was used to explore how growth controls NADPH generation and availability, and the resulting impact on PHB (polyhydroxybutyrate) yields and kinetics. The results show that the anabolic demand allows NADPH production through the Entner-Doudoroff (ED) pathway, leading to a high maximal theoretical PHB production yield of 0.89 C-mole C-mole(-1); whereas without biomass production, NADPH regeneration is only possible via isocitrate dehydrogenase, leading to a theoretical yield of 0.67 C-mole C-mole(-1). Furthermore, the maximum specific rate of NADPH production at the maximal growth rate (to fulfil the biomass requirement) was found to be the ceiling in all conditions, and it consequently determines the maximal PHB production rate. These results imply that sustaining a controlled residual growth improves the specific PHB production rate without altering the production yield.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many drugs approved by the US Food and Drug Administration (FDA), researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced to this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s the maximum entropy principle has been used not only as a physical law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on ways of utilizing maximum entropy.
ON RESERVES, STABILITY AND THE MAXIMUM SUSTAINABLE YIELD PROBLEM
Arild Wikan
2013-01-01
Because several species of commercial interest have been overexploited throughout the years, it is important to understand how harvest and other factors, such as the introduction of reserves, influence the dynamic behaviour of a population with respect to stability and possible extinction. Therefore, simple one-population models are analysed, and it is shown that harvest acts in a strongly stabilizing fashion in the sense that it may transfer a population which exhibits chaotic oscillations to a state where the equilibrium population is stable. Moreover, if we divide the habitat of the population into a reserve and a harvest zone, we find that increased harvest as well as migration between the two areas act in a stabilizing manner, but that the former turns out to be the dominant effect. If age structure is included in the population model, harvest may not necessarily play the same role, especially if the number of age classes becomes large. Regarding MSY, we demonstrate that it is indeed possible to obtain the same MSY in the case where the habitat is split into a reserve and a harvest zone as in the case of no reserve.
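A hedged illustration of the qualitative claim that harvest can stabilize a chaotic one-population model, using a simple discrete logistic model with proportional harvest; this is not the specific model analysed in the paper, and the parameter values are placeholders.

```python
# Hedged illustration (not the paper's exact model): discrete logistic
# population x_{t+1} = r*x_t*(1 - x_t) - h*x_t. With r = 3.9 and no harvest
# the dynamics are chaotic; a moderate proportional harvest rate h moves the
# interior equilibrium into its stable range.
def simulate(r, h, x0=0.3, burn_in=500, keep=8):
    x = x0
    for _ in range(burn_in):
        x = max(r * x * (1.0 - x) - h * x, 0.0)
    out = []
    for _ in range(keep):
        x = max(r * x * (1.0 - x) - h * x, 0.0)
        out.append(round(x, 4))
    return out

print("no harvest    :", simulate(r=3.9, h=0.0))   # wanders chaotically
print("harvest h=2.0 :", simulate(r=3.9, h=2.0))   # settles near a fixed point
```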
Consensus theoretic classification methods
Benediktsson, Jon A.; Swain, Philip H.
1992-01-01
Consensus theory is adopted as a means of classifying geographic data from multiple sources. The foundations and usefulness of different consensus theoretic methods are discussed in conjunction with pattern recognition. Weight selections for different data sources are considered and modeling of non-Gaussian data is investigated. The application of consensus theory in pattern recognition is tested on two data sets: 1) multisource remote sensing and geographic data and 2) very-high-dimensional remote sensing data. The results obtained using consensus theoretic methods are found to compare favorably with those obtained using well-known pattern recognition methods. The consensus theoretic methods can be applied in cases where the Gaussian maximum likelihood method cannot. Also, the consensus theoretic methods are computationally less demanding than the Gaussian maximum likelihood method and provide a means for weighting data sources differently.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
De Lara, Michel; Oliveros-Ramos, Ricardo; Tam, Jorge
2011-01-01
The World Summit on Sustainable Development (Johannesburg, 2002) encouraged the application of the ecosystem approach by 2010. However, at the same Summit, the signatory States undertook to restore and exploit their stocks at maximum sustainable yield (MSY), a concept and practice without an ecosystemic dimension, since MSY is computed species by species on the basis of a monospecific model. Acknowledging this gap, we propose a definition of "ecosystem viable yields" (EVY) as yields compatible (i) with biological viability levels for all time and (ii) with the ecosystem dynamics. Unlike MSY, this notion is not based on equilibrium but on viability theory, which offers advantages for robustness. For a generic class of multispecies models with harvesting, we provide explicit expressions for the EVY. We apply our approach to the anchovy and hake pair in the Peruvian upwelling ecosystem between the years 1971 and 1981.
王海军; 柳敏燕; 高娟
2013-01-01
Grain security is a complex, resource-intensive problem being addressed by governments, international organizations, and the scientific community. Ensuring an adequate grain supply is vital, and the key lies in improving total agricultural land productivity; the level of the comprehensive productive capacity of agriculture is directly related to the effective supply of grain. In 2011, China conducted a nationwide agricultural productivity survey in order to improve its overall capacity, protect farmland quality, and support the National Food Security Strategy. The core components of agricultural land productivity accounting are the theoretical and accessible yields, which have traditionally been calculated with simple linear regression models between grading-factor quality scores and yield. To overcome the limitations of this approach, genetic algorithm and support vector regression theory were applied to agricultural land productivity accounting, and models for theoretical and accessible yield were constructed: a genetic algorithm-support vector machine (GA-SVM) model relating the sample grading-factor quality scores to theoretical yield was built to predict theoretical yield, and a GA-SVM model relating the product of the grading-factor quality scores and the land-use coefficient to accessible yield was built to predict accessible yield. Taking the productivity accounting of Jiexi County, Guangdong Province as an example, yields were calculated with both the GA-SVM models and simple linear regression models and the results were compared. The comparison shows that GA-SVM gives higher accuracy for both theoretical and accessible yield, is well suited to predicting individual sample values, and can serve as a new method for agricultural land productivity accounting.
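A hedged sketch of support vector regression for this kind of yield prediction, with a small randomized hyperparameter search standing in for the genetic-algorithm optimization described above. The feature (a single grading-factor quality score) and yield values are synthetic placeholders, not the Jiexi County data.

```python
# Hedged sketch: support vector regression for yield prediction, with a
# randomized hyperparameter search used here as a simplified stand-in for the
# genetic-algorithm (GA) tuning described above. Data are synthetic placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
quality_score = rng.uniform(60, 100, size=(120, 1))              # grading-factor quality score
theoretical_yield = 2.0 + 0.08 * quality_score[:, 0] + rng.normal(0, 0.5, 120)  # t/ha placeholder

search = RandomizedSearchCV(
    SVR(kernel="rbf"),
    param_distributions={"C": np.logspace(-1, 3, 50),
                         "gamma": np.logspace(-4, 0, 50),
                         "epsilon": np.linspace(0.01, 1.0, 20)},
    n_iter=30, cv=5, random_state=0,
)
search.fit(quality_score, theoretical_yield)
print("best hyperparameters:", search.best_params_)
print("predicted yield for score 85:", search.best_estimator_.predict([[85.0]]))
```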
Yield Improvement in Steel Casting (Yield II)
Richard A. Hardin; Christoph Beckermann; Tim Hays
2002-02-18
This report presents work conducted on the following main project tasks undertaken in the Yield Improvement in Steel Casting research program: Improvement of Conventional Feeding and Risering Methods, Use of Unconventional Yield Improvement Techniques, and Case Studies in Yield Improvement. Casting trials were conducted and then simulated using the precise casting conditions as recorded by the participating SFSA foundries. These results present a statistically meaningful set of experimental data on soundness versus feeding length. Comparisons between these casting trials and casting trials performed more than forty years ago by Pellini and the SFSA are quite good and appear reasonable. Comparisons between the current SFSA feeding rules and feeding rules based on the minimum Niyama criterion reveal that the Niyama-based rules are generally less conservative. The Niyama-based rules also agree better with both the trials presented here and the casting trials performed by Pellini and the SFSA years ago. Furthermore, the use of the Niyama criterion to predict centerline shrinkage for horizontally fed plate sections has a theoretical basis according to the casting literature reviewed here. These results strongly support the use of improved feeding rules for horizontal plate sections based on the Niyama criterion, which can be tailored to the casting conditions for a given alloy and to a desired level of soundness. The reliability and repeatability of ASTM shrinkage x-ray ratings was investigated in a statistical study performed on 128 x-rays, each of which was rated seven different times. A manual 'Feeding and Risering Guidelines for Steel Castings' is given in this final report. Results of casting trials performed to test unconventional techniques for improving casting yield are presented. These use a stacked arrangement of castings and riser pressurization to increase the casting yield. Riser pressurization was demonstrated to feed a casting up to
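A hedged sketch of evaluating the Niyama criterion mentioned above, Ny = G / sqrt(dT/dt), at a few casting locations and flagging points below a chosen threshold as shrinkage-prone. The threshold and the thermal values are placeholders, not the SFSA feeding-rule values.

```python
# Hedged sketch: Niyama criterion Ny = G / sqrt(dT/dt), with G the local thermal
# gradient (C/cm) and dT/dt the cooling rate (C/s) near the end of solidification.
# Locations with Ny below a chosen threshold are flagged as prone to centerline
# shrinkage. All numbers are placeholders, not SFSA rule values.
import math

def niyama(gradient_c_per_cm: float, cooling_rate_c_per_s: float) -> float:
    return gradient_c_per_cm / math.sqrt(cooling_rate_c_per_s)

points = {          # (thermal gradient, cooling rate) from a hypothetical simulation
    "riser contact": (8.0, 0.5),
    "mid-plate":     (0.5, 0.8),
    "end of plate":  (4.0, 0.6),
}
THRESHOLD = 1.0     # placeholder minimum Niyama value, (C*s)^0.5 / cm

for name, (g, rate) in points.items():
    ny = niyama(g, rate)
    flag = "shrinkage risk" if ny < THRESHOLD else "sound"
    print(f"{name:>13s}: Ny = {ny:.2f}  ->  {flag}")
```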
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders.
Bruus, Henrik
in complexity, a proper theoretical understanding becomes increasingly important. The basic idea of the book is to provide a self-contained formulation of the theoretical framework of microfluidics, and at the same time give physical motivation and examples from lab-on-a-chip technology. After three chapters...
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Rius, Jordi
2006-09-01
The maximum-likelihood method is applied to direct methods to derive a more general probability density function of the triple-phase sums which is capable of predicting negative values. This study also proves that maximization of the origin-free modulus sum function S yields, within the limitations imposed by the assumed approximations, the maximum-likelihood estimates of the phases. It thus represents the formal theoretical justification of the S function that was initially derived from Patterson-function arguments [Rius (1993). Acta Cryst. A49, 406-409].
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
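A hedged sketch of the general maximum correntropy idea: maximize a Gaussian-kernel similarity between predictions and labels, plus an L2 regularizer, by gradient ascent on a linear predictor. Because residuals from mislabeled samples fall into the kernel's tail, they contribute little to the gradient. This illustrates the MCC principle only; it is not the paper's exact alternating algorithm.

```python
# Hedged sketch of the maximum correntropy criterion (MCC) for a regularized
# linear predictor: maximize sum_i exp(-(y_i - w.x_i)^2 / (2*sigma^2)) - lam*||w||^2
# by gradient ascent. Outlier labels contribute little to the gradient, which is
# what makes the objective robust to label noise. Not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.sign(X @ np.array([1.5, -2.0]))
y[rng.choice(200, size=20, replace=False)] *= -1       # flip 10% of labels (noise)

def fit_mcc(X, y, sigma=1.0, lam=0.01, lr=0.05, iters=500):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        r = y - X @ w                                   # residuals
        k = np.exp(-r**2 / (2 * sigma**2))              # per-sample correntropy kernel
        grad = (k * r) @ X / sigma**2 - 2 * lam * w     # gradient of the objective
        w += lr * grad / len(y)
    return w

w = fit_mcc(X, y)
acc = np.mean(np.sign(X @ w) == y)
print("weights:", np.round(w, 2), " accuracy on (noisy) training labels:", round(acc, 2))
```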
Sarker, Shiplu [Department of Renewable Energy, Faculty of Engineering and Science, University of Agder, Grimstad-4879 (Norway); Moeller, Henrik Bjarne [Department of Biosystems Engineering, Faculty of Science and Technology, Aarhus University, Research center Foulum, Blichers Alle, Post Box 50, Tjele-8830 (Denmark)
2013-07-01
Concentrated molasses (C5 molasses) from a 2nd generation bioethanol plant has been investigated for enhancing the productivity of manure-based digesters. A batch study at mesophilic conditions (35 ± 1 °C) showed a maximum methane yield from molasses of 286 L CH4/kg VS, which was approximately 63% of the calculated theoretical yield. In addition to the batch study, co-digestion of molasses with cattle manure in a semi-continuously stirred reactor at thermophilic temperature (50 ± 1 °C) was also performed with a stepwise increase in molasses concentration. The results from this experiment revealed a maximum average biogas yield of 1.89 L/L/day when 23% VS from molasses was co-digested with cattle manure. However, digesters fed with more than 32% VS from molasses and with a short adaptation period resulted in VFA accumulation and reduced methane productivity, indicating that when using molasses as a biogas booster this level should not be exceeded.
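A small arithmetic check of the "approximately 63% of theoretical" statement: dividing the measured batch yield by the stated conversion fraction gives the implied theoretical methane potential. The study's own theoretical value is not restated here; this is only a consistency check.

```python
# Simple arithmetic behind "about 63% of the calculated theoretical yield":
# the theoretical methane potential implied by 286 L CH4/kg VS at 63% conversion
# is roughly 286 / 0.63. Consistency check only, not a value taken from the study.
measured = 286.0                 # L CH4 per kg VS (batch test)
fraction_of_theoretical = 0.63
implied_theoretical = measured / fraction_of_theoretical
print(f"implied theoretical yield ~ {implied_theoretical:.0f} L CH4/kg VS")
```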
Shiplu Sarker, Henrik Bjarne Møller
2013-01-01
Concentrated molasses (C5 molasses) from a 2nd generation bioethanol plant has been investigated for enhancing the productivity of manure-based digesters. A batch study at mesophilic conditions (35 ± 1 °C) showed a maximum methane yield from molasses of 286 L CH4/kg VS, which was approximately 63% of the calculated theoretical yield. In addition to the batch study, co-digestion of molasses with cattle manure in a semi-continuously stirred reactor at thermophilic temperature (50 ± 1 °C) was also performed with a stepwise increase in molasses concentration. The results from this experiment revealed a maximum average biogas yield of 1.89 L/L/day when 23% VS from molasses was co-digested with cattle manure. However, digesters fed with more than 32% VS from molasses and with a short adaptation period resulted in VFA accumulation and reduced methane productivity, indicating that when using molasses as a biogas booster this level should not be exceeded.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
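A hedged sketch of the Wiener index referenced above: it is the sum of shortest-path distances over all unordered vertex pairs of a graph, computed here for a tree by breadth-first search from each vertex. The example tree is arbitrary, not an extremal tree for a prescribed degree sequence.

```python
# Wiener index of a tree: sum of shortest-path distances over all unordered
# vertex pairs, computed by BFS from each vertex. Example tree is arbitrary.
from collections import deque

def wiener_index(adj):
    n = len(adj)
    total = 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist)
    return total // 2   # each unordered pair is counted twice

# A small tree on 6 vertices: a path 0-1-2-3 with leaves 4 and 5 on vertices 1 and 2.
adj = {0: [1], 1: [0, 2, 4], 2: [1, 3, 5], 3: [2], 4: [1], 5: [2]}
print(wiener_index([adj[i] for i in range(6)]))
```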
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector that mitigates the intersymbol interference introduced by bandlimited channels. The detector, named the equalized near-maximum-likelihood detector, combines a nonlinear equalizer with a near-maximum-likelihood detector. Simulation results show that the performance of the equalized near-maximum-likelihood detector is better than that of the nonlinear equalizer alone but worse than that of the near-maximum-likelihood detector.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
Hansen, Jonas L; Nielsen, Jens H; Stapelfeldt, Henrik; Dimitrovski, Darko; Madsen, Lars Bojer
2011-01-01
The yield of strong-field ionization, by a linearly polarized probe pulse, is studied experimentally and theoretically, as a function of the relative orientation between the laser field and the molecule. Experimentally, carbonyl sulfide, benzonitrile and naphthalene molecules are aligned in one or three dimensions before being singly ionized by a 30 fs laser pulse centered at 800 nm. Theoretically, we address the behaviour of these three molecules. We consider the degree of alignment and orientation and model the angular dependence of the total ionization yield by molecular tunneling theory accounting for the Stark shift of the energy level of the ionizing orbital. For naphthalene and benzonitrile the orientational dependence of the ionization yield agrees well with the calculated results, in particular the observation that ionization is maximized when the probe laser is polarized along the most polarizable axis. For OCS the observation of maximum ionization yield when the probe is perpendicular to the intern...
Saffman-Taylor instability in yield stress fluids
Maleki-Jirsaraei, Nahid [Laboratoire de Physique Statistique, Ecole Normale Superieure, 24, Rue Lhomond, F-75231 Paris Cedex 05 (France); Complex Systems Laboratory, Physics Department, Azzahra University, Tehran (Iran, Islamic Republic of); Lindner, Anke [LMDH-PMMH, Ecole de Physique et Chimie de la Ville de Paris, 10 rue Vauquelin, 75231 Paris Cedex 05 (France); Rouhani, Shahin [Physics Department, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Bonn, Daniel [Laboratoire de Physique Statistique, Ecole Normale Superieure, 24, Rue Lhomond, F-75231 Paris Cedex 05 (France); Van der Waals-Zeeman Instituut, Valckenierstraat 65, 1018 XE Amsterdam (Netherlands)
2005-04-13
Pushing a fluid with a less viscous one gives rise to the well-known Saffman-Taylor instability. This instability is important in a wide variety of applications involving strongly non-Newtonian fluids that often exhibit a yield stress. Here we investigate the Saffman-Taylor instability in this type of fluid, in longitudinal flows in Hele-Shaw cells. In particular, we study Darcy's law for yield stress fluids. The dispersion equation for the flow is similar to the equations obtained for ordinary viscous fluids, but the viscous terms in the dimensionless numbers conditioning the instability now contain the yield stress. This also has repercussions on the wavelength of the instability, as follows from a linear stability analysis. As a consequence of the presence of the yield stress, the wavelength of maximum growth is finite even at vanishing velocities. We study Darcy's law and the fingering patterns experimentally for a yield stress fluid in a linear Hele-Shaw cell. The results are in rather good agreement with the theoretical predictions. In addition we observe different regimes that lead to different morphologies of the fingering patterns, in both rectangular and circular Hele-Shaw cells.
Effect of density and planting pattern on yield and yield components
alireza yadavi
2009-06-01
In order to evaluate the competitive ability of grain maize (Zea mays L.) against redroot pigweed (Amaranthus retroflexus L.), a field experiment was conducted in Esfahan in 2003. In this research the effect of corn spatial arrangement on the yield and yield components of corn (three-way cross hybrid 647) under different levels of redroot pigweed infestation was investigated. Treatments were arranged in a factorial split-plot experiment based on an RCBD with three replications. The factorial arrangement of corn densities (74000 and 111000 plants/ha) and planting patterns (single row, rectangular twin row and zigzag twin row) formed the main plots. Split plots referred to pigweed densities (0, 4, 8 and 12 plants/m). Results showed that both the grain and the biological yield of corn increased as corn density increased, whereas the number of rows per cob, the number of grains per row and the 1000-grain weight decreased. The effects of planting arrangement on yield and yield components, except grain rows per cob, 1000-grain weight and harvest index, were statistically significant. Corn grain yield and yield components decreased significantly with increasing pigweed density. The effect of redroot pigweed density on corn grain and biological yield loss was predicted using the Cousens hyperbolic yield-loss equation; maximum grain and biological yield losses occurred in the single-row arrangement at the lower corn density. The number of rows per cob and the number of grains per row showed smaller reductions under pigweed competition at the higher corn density. In addition, grain rows per cob and corn harvest index decreased less in the twin-row arrangements than in the single-row treatment under pigweed competition. The results of this research indicate that the competitive ability of corn against redroot pigweed can be increased by using a dense population (1.5-fold the usual density) and a zigzag twin-row arrangement.
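A hedged sketch of the rectangular-hyperbola yield-loss model commonly attributed to Cousens and referenced above, YL(%) = i*D / (1 + i*D/a), which relates crop yield loss to weed density. The parameter values are placeholders, not the fitted values of this study.

```python
# Hedged sketch of the Cousens rectangular-hyperbola yield-loss model:
# yield loss YL(%) = i*D / (1 + i*D/a), with D the weed density, i the percent
# loss per weed at low density, and a the asymptotic maximum loss.
# Parameters are placeholders, not this study's fitted values.
def yield_loss_percent(density, i=6.0, a=55.0):
    return i * density / (1.0 + i * density / a)

def grain_yield(density, weed_free_yield=9.0, **kw):   # t/ha, placeholder
    return weed_free_yield * (1.0 - yield_loss_percent(density, **kw) / 100.0)

for d in (0, 4, 8, 12):   # pigweed densities as in the trial design
    print(f"pigweed density {d:>2d}: loss ~ {yield_loss_percent(d):4.1f}%, "
          f"yield ~ {grain_yield(d):.2f} t/ha")
```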
Marc Vanderhaeghen
2007-04-01
The theoretical issues in the interpretation of the precision measurements of the nucleon-to-Delta transition by means of electromagnetic probes are highlighted. The results of these measurements are confronted with the state-of-the-art calculations based on chiral effective-field theories (EFT), lattice QCD, large-Nc relations, perturbative QCD, and QCD-inspired models. The link of the nucleon-to-Delta form factors to generalized parton distributions (GPDs) is also discussed.
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
With the idea of maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide region deletion test rules and design an interval maximum entropy algorithm for the quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
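A hedged sketch of the maximum-entropy (log-sum-exp) aggregation of inequality constraints that turns a small constrained quadratic program into a smooth unconstrained problem, minimized here with SciPy. The interval-analysis and region-deletion machinery of the paper is not reproduced; the QP, smoothing parameter and penalty weight are placeholders.

```python
# Hedged sketch: aggregate inequality constraints g_i(x) <= 0 of a small QP with
# the maximum-entropy (log-sum-exp) function F_p(x) = (1/p)*log(sum_i exp(p*g_i(x)))
# and minimize f(x) + mu * max(F_p(x), 0) as a smooth unconstrained problem.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Example QP: minimize f(x) = x1^2 + x2^2 - 2*x1
# subject to g1 = x1 + x2 - 1 <= 0 and g2 = -x2 <= 0.
def f(x):
    return x[0]**2 + x[1]**2 - 2.0 * x[0]

def constraints(x):
    return np.array([x[0] + x[1] - 1.0, -x[1]])

def penalized(x, p=50.0, mu=10.0):
    F_p = logsumexp(p * constraints(x)) / p      # smooth approximation of max_i g_i(x)
    return f(x) + mu * max(F_p, 0.0)

res = minimize(penalized, x0=np.zeros(2), method="Nelder-Mead")
print("approximate solution:", np.round(res.x, 3))   # expected near (1, 0)
```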
Hardness and yield strength of dentin from simulated nano-indentation tests.
Toparli, M; Koksal, N S
2005-03-01
The finite element method (FEM) is applied to study the hardness (H) and yield strength (Y) of dentin subjected to a nano-indentation process. The nano-indentation experiments were simulated with the ABAQUS finite element software package. The test, performed with a spherical indenter, was simulated by axisymmetric finite element analysis. The load versus displacement was calculated during the loading-unloading sequence for different values of the elastic modulus (E) and yield strength. Hardness and the maximum principal compressive and tensile stresses were plotted for different elastic moduli depending on the yield strength. The dentin was assumed to be isotropic, homogeneous and elasto-plastic. The theoretical results outlined in this study were compared with experimental work reported in the literature, and the hardness and yield strength of dentin were then estimated.
Mikeš, Daniel
2010-05-01
Theoretical geology. Present-day geology is mostly empirical in nature. I claim that geology is by nature complex and that the empirical approach is bound to fail. Let us consider the input to be the set of ambient conditions and the output to be the sedimentary rock record. I claim that the output can only be deduced from the input if the relation from input to output is known. The fundamental question is therefore the following: can one predict the output from the input, i.e. can one predict the behaviour of a sedimentary system? If one can, then the empirical/deductive method has a chance; if one cannot, then that method is bound to fail. The fundamental problem to solve is therefore: how does one predict the behaviour of a sedimentary system? It is interesting to observe that this question is never asked, and many a study is conducted by the empirical/deductive method; it seems that the empirical method has been accepted as appropriate without question. It is, however, easy to argue that a sedimentary system is by nature complex, that several input parameters vary at the same time, and that they can create similar output in the rock record. It follows trivially from these first principles that in such a case the deductive solution cannot be unique. At the same time, several geological methods depart precisely from the assumption that one particular variable is the dictator/driver and that the others are constant, even though the data do not support such an assumption. The method of "sequence stratigraphy" is a typical example of such a dogma. It can easily be argued that all interpretation resulting from a method built on uncertain or wrong assumptions is erroneous. Still, this method has survived for many years, notwithstanding all the criticism it has received. This is just one example from the present-day geological world and is not unique. Even the alternative methods criticising sequence stratigraphy actually depart from the same
Joos, Georg
1986-01-01
Among the finest, most comprehensive treatments of theoretical physics ever written, this classic volume comprises a superb introduction to the main branches of the discipline and offers solid grounding for further research in a variety of fields. Students will find no better one-volume coverage of so many essential topics; moreover, since its first publication, the book has been substantially revised and updated with additional material on Bessel functions, spherical harmonics, superconductivity, elastomers, and other subjects.The first four chapters review mathematical topics needed by theo
Stöltzner, Michael
Answering to the double-faced influence of string theory on mathematical practice and rigour, the mathematical physicists Arthur Jaffe and Frank Quinn have contemplated the idea that there exists a `theoretical' mathematics (alongside `theoretical' physics) whose basic structures and results still require independent corroboration by mathematical proof. In this paper, I shall take the Jaffe-Quinn debate mainly as a problem of mathematical ontology and analyse it against the backdrop of two philosophical views that are appreciative towards informal mathematical development and conjectural results: Lakatos's methodology of proofs and refutations and John von Neumann's opportunistic reading of Hilbert's axiomatic method. The comparison of both approaches shows that mitigating Lakatos's falsificationism makes his insights about mathematical quasi-ontology more relevant to 20th century mathematics in which new structures are introduced by axiomatisation and not necessarily motivated by informal ancestors. The final section discusses the consequences of string theorists' claim to finality for the theory's mathematical make-up. I argue that ontological reductionism as advocated by particle physicists and the quest for mathematically deeper axioms do not necessarily lead to identical results.
Simulation Models of Leaf Area Index and Yield for Cotton Grown with Different Soil Conditioners.
Lijun Su
Simulation models of leaf area index (LAI) and yield for cotton can provide a theoretical foundation for predicting future variations in yield. This paper analyses the increase in LAI and the relationships between LAI, dry matter, and yield for cotton under three soil conditioners near Korla, Xinjiang, China. Dynamic changes in cotton LAI were evaluated using modified logistic, Gaussian, modified Gaussian, log-normal, and cubic polynomial models. Universal models for simulating the relative leaf area index (RLAI) were established in which the application rate of soil conditioner was used to estimate the maximum LAI (LAIm). In addition, the relationships between LAIm and dry matter mass, yield, and the harvest index were investigated, and a simulation model for yield is proposed. A feasibility analysis of the models indicated that the cubic polynomial and Gaussian models were less accurate than the other three models for simulating increases in RLAI. Despite significant differences in LAI under the type and amount of soil conditioner applied, LAIm could be described by aboveground dry matter using Michaelis-Menten kinetics. Moreover, the simulation model for cotton yield based on LAIm and the harvest index presented in this work provided important theoretical insights for improving water use efficiency in cotton cultivation and for identifying optimal application rates of soil conditioners.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
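A small illustration of the statistical idea behind this kind of fitting, not the CORA code itself: counts in each spectral bin are treated as Poisson draws from a model spectrum (here a Gaussian emission line on a flat background, both assumed for the example), and the Poisson log-likelihood is maximized over the model parameters.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 101)                      # wavelength bins (arbitrary units)

def model(x, amp, mu, sigma, bkg):
    # Gaussian emission line plus constant background (illustrative model)
    return bkg + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

counts = rng.poisson(model(x, amp=8.0, mu=0.3, sigma=0.8, bkg=1.5))  # simulated data

def neg_log_like(theta):
    amp, mu, sigma, bkg = theta
    lam = model(x, amp, mu, sigma, bkg)
    if sigma <= 0 or np.any(lam <= 0):
        return np.inf
    # Poisson log-likelihood up to a parameter-independent constant (log n_i!)
    return -np.sum(counts * np.log(lam) - lam)

fit = minimize(neg_log_like, x0=[5.0, 0.0, 1.0, 1.0], method="Nelder-Mead")
print("maximum-likelihood estimates (amp, mu, sigma, bkg):", fit.x)
```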
Theoretical Physics 1. Theoretical Mechanics
Dreizler, Reiner M.; Luedde, Cora S. [Frankfurt Univ. (Germany). Inst. fuer Theoretische Physik
2010-07-01
After an introduction to basic concepts of mechanics more advanced topics build the major part of this book. Interspersed is a discussion of selected problems of motion. This is followed by a concise treatment of the Lagrangian and the Hamiltonian formulation of mechanics, as well as a brief excursion on chaotic motion. The last chapter deals with applications of the Lagrangian formulation to specific systems (coupled oscillators, rotating coordinate systems, rigid bodies). The level of this textbook is advanced undergraduate. The authors combine teaching experience of more than 40 years in all fields of Theoretical Physics and related mathematical disciplines and thorough knowledge in creating advanced eLearning content. The text is accompanied by an extensive collection of online material, in which the possibilities of the electronic medium are fully exploited, e.g. in the form of applets, 2D- and 3D-animations. (orig.)
Barney G. Glaser, Ph.D., Hon. Ph.D.
2009-11-01
Theoretical sorting has brought the analyst to the point of pent-up pressure to write: to see the months of work actualized in a "piece." But this is only a personal pressure. The goal of grounded theory methodology, above all, is to offer the results to the public, usually through one or more publications. We will focus on writing for publication, which is the most frequent way that the analyst can tell how people are "buying" what really matters in sociology, or in other fields. Both feedback on and use of publications will be the best evaluation of the analyst's grounded theory. It will be his main source of criticism, constructive critique, and frequently of career rewards. In any case, he has to write to expand his audience beyond the limited number of close colleagues and students. Unless there is a publication, his work will be relegated to limited discussion, classroom presentation, or even private fantasy. The rigor and value of grounded theory work deserve publication. And many analysts have a stake in reaching wider publics, which makes their substantive grounded theory count.
Borkowski, Andrzej
2015-12-01
The paper presents a summary of research activities concerning theoretical geodesy performed in Poland in the period 2011-2014. It contains the results of research on new methods of parameter estimation, a study of the robustness properties of the M-estimation, control network and deformation analysis, and geodetic time series analysis. The main achievements in geodetic parameter estimation involve a new model of the M-estimation with probabilistic models of geodetic observations, a new Shift-Msplit estimation, which allows a vector of parameter differences to be estimated, and the Shift-Msplit(+), which is a generalisation of Shift-Msplit estimation when the design matrix A of the functional model does not have full column rank. The new algorithms for coordinate conversion between Cartesian and geodetic coordinates, on both the rotational and triaxial ellipsoids, can be mentioned as highlights of the research of the last four years. The new parameter estimation models developed have been adopted and successfully applied to control network and deformation analysis.
Borkowski, Andrzej; Kosek, Wiesław
2015-12-01
The paper presents a summary of research activities concerning theoretical geodesy performed in Poland in the period 2011-2014. It contains the results of research on new methods of parameter estimation, a study of the robustness properties of the M-estimation, control network and deformation analysis, and geodetic time series analysis. The main achievements in geodetic parameter estimation involve a new model of the M-estimation with probabilistic models of geodetic observations, a new Shift-Msplit estimation, which allows a vector of parameter differences to be estimated, and the Shift-Msplit(+), which is a generalisation of Shift-Msplit estimation when the design matrix A of the functional model does not have full column rank. The new algorithms for coordinate conversion between Cartesian and geodetic coordinates, on both the rotational and triaxial ellipsoids, can be mentioned as highlights of the research of the last four years. The new parameter estimation models developed have been adopted and successfully applied to control network and deformation analysis. New algorithms based on the wavelet, Fourier and Hilbert transforms were applied to find time-frequency characteristics of geodetic and geophysical time series as well as time-frequency relations between them. Statistical properties of these time series are also presented using different statistical tests as well as the 2nd, 3rd and 4th moments about the mean. New forecasting methods are presented which enable prediction of the considered time series in different frequency bands.
Low-Yield Cigarettes (CDC). ... they compensate when smoking them. Smokers Who Use Low-Yield Cigarettes: Many smokers consider smoking low-yield ...
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
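For orientation only, a tiny example of the same estimation problem (l1-penalized maximum likelihood for a sparse Gaussian graphical model) using scikit-learn's GraphicalLasso; this is not the authors' block coordinate descent or Nesterov-based solvers, and the ground-truth precision matrix below is an assumption made up for the demo.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n, p = 500, 5

# Sparse ground-truth precision matrix: one off-diagonal edge between nodes 0 and 1.
prec = np.eye(p)
prec[0, 1] = prec[1, 0] = 0.4
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)   # simulated Gaussian data

# l1-penalized maximum likelihood estimate of the precision matrix
model = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(model.precision_, 2))   # near-zero entries correspond to missing edges
```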
Andrey A. Toropov
2001-06-01
The enthalpy of formation of a set of 60 hydrocarbons is calculated on the basis of topological descriptors defined from the distance and detour matrices within the realm of QSAR/QSPR theory. Linear and non-linear polynomial fittings are made, and the results show the need to resort to higher-order regression equations in order to obtain better agreement between theoretical results and the available experimental data. Besides, topological indices computed from maximum-order distances seem to yield rather satisfactory predictions of heats of formation for hydrocarbons.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
Theoretical Mechanics Theoretical Physics 1
Dreizler, Reiner M
2011-01-01
After an introduction to basic concepts of mechanics more advanced topics build the major part of this book. Interspersed is a discussion of selected problems of motion. This is followed by a concise treatment of the Lagrangian and the Hamiltonian formulation of mechanics, as well as a brief excursion on chaotic motion. The last chapter deals with applications of the Lagrangian formulation to specific systems (coupled oscillators, rotating coordinate systems, rigid bodies). The level of this textbook is advanced undergraduate. The authors combine teaching experience of more than 40 years in all fields of Theoretical Physics and related mathematical disciplines and thorough knowledge in creating advanced eLearning content. The text is accompanied by an extensive collection of online material, in which the possibilities of the electronic medium are fully exploited, e.g. in the form of applets, 2D- and 3D-animations. - A collection of 74 problems with detailed step-by-step guidance towards the solutions. - A col...
Liang Tao; Jia Xinzhang
2012-01-01
The problem of yield estimation merely from performance test data of qualified semiconductor devices is studied. An empirical formula is presented to calculate the yield directly from the sample mean and standard deviation of singly truncated normal samples, based on the theoretical relation between process capability indices and yield. Firstly, we compare four commonly used normality tests under different conditions, and simulation results show that the Shapiro-Wilk test is the most powerful test for recognizing singly truncated normal samples. Secondly, the maximum likelihood estimation method and the empirical formula are compared by Monte Carlo simulation. The results show that the simple empirical formula can achieve almost the same accuracy as the maximum likelihood estimation method, but with a much lower computational cost, when estimating yield from singly truncated normal samples. In addition, the empirical formula can also be used for doubly truncated normal samples when certain conditions are met. Practical examples of yield estimation from academic and IC test data are given to verify the effectiveness of the proposed method.
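The textbook relation between a one-sided process capability index and yield that this abstract builds on can be sketched as follows; the paper's empirical correction for singly truncated samples is not reproduced here, and the process numbers are illustrative.

```python
from scipy.stats import norm

mu, sigma, usl = 10.0, 0.5, 11.2           # process mean, std, upper specification limit
cpu = (usl - mu) / (3.0 * sigma)            # one-sided capability index C_pu
yield_fraction = norm.cdf(3.0 * cpu)        # P(X <= USL) for a normally distributed process
print(f"C_pu = {cpu:.2f}, predicted yield = {yield_fraction:.4f}")
```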
Efficient prediction of (p,n) yields
Swift, D C; McNaney, J M; Higginson, D P; Beg, F
2009-09-09
In the continuous deceleration approximation, charged particles decelerate without any spread in energy as they traverse matter. This approximation simplifies the calculation of the yield of nuclear reactions, for which the cross-section depends on the particle energy. We calculated (p,n) yields for a LiF target, using the Bethe-Bloch relation for proton deceleration, and predicted that the maximum yield would be around 0.25% neutrons per incident proton, for an initial proton energy of 70 MeV or higher. Yield-energy relations calculated in this way can readily be used to optimize source and (p,n) converter characteristics.
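A rough numerical sketch of the thick-target yield integral implied by the continuous-deceleration (continuous-slowing-down) picture, Y(E0) = ∫ n·σ(E)/|dE/dx| dE. The cross-section shape, stopping-power trend and atom density used below are placeholders, not evaluated nuclear data for LiF, so the printed numbers are only illustrative of the method.

```python
import numpy as np

def sigma_pn(E_MeV):
    # hypothetical smooth (p,n) excitation function with a ~2 MeV threshold [barn]
    return np.where(E_MeV > 2.0, 0.3 * (1.0 - np.exp(-(E_MeV - 2.0) / 5.0)), 0.0)

def stopping_power(E_MeV):
    # crude Bethe-Bloch-like 1/E trend [MeV cm^2 / g]; placeholder only
    return 170.0 / np.maximum(E_MeV, 0.1)

def thick_target_yield(E0, n_per_gram=8.4e21):       # target atoms per gram (assumed)
    E = np.linspace(2.0, E0, 2000)
    integrand = n_per_gram * sigma_pn(E) * 1e-24 / stopping_power(E)   # barn -> cm^2
    return np.trapz(integrand, E)                     # neutrons per incident proton

for E0 in (10.0, 30.0, 70.0):
    print(f"E0 = {E0:5.1f} MeV -> yield ~ {thick_target_yield(E0):.2e} n/proton")
```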
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Bauer, Alexander; Bösch, Peter; Friedl, Anton; Amon, Thomas
2009-06-01
Agrarian biomass as a renewable energy source can contribute to a considerable CO2 reduction. The overriding goal of the European Union is to cut energy-consumption-related greenhouse gas emissions in the EU by 20% by the year 2020. This publication aims at optimising methane production from steam-exploded wheat straw and presents a theoretical estimation of the ethanol and methane potential of straw. For this purpose, wheat straw was pretreated by steam explosion using different time/temperature combinations. Specific methane yields were analyzed according to VDI 4630. Pretreatment of wheat straw by steam explosion significantly increased the methane yield from anaerobic digestion, by up to 20% or a maximum of 331 l_N kg^-1 VS, compared to untreated wheat straw. Furthermore, the residual anaerobic digestion potential for methane after ethanol fermentation was determined by enzymatic hydrolysis of pretreated wheat straw using cellulase. Based on the resulting glucose concentration, the ethanol yield and the residual sugar available for methane production were calculated. The theoretical maximum ethanol yield of wheat straw was estimated to be 0.249 kg kg^-1 dry matter. The achievable maximum ethanol yield per kg of wheat straw dry matter pretreated by steam explosion and enzymatic hydrolysis was estimated to be 0.200 kg under pretreatment conditions of 200 degrees C and 10 min, corresponding to 80% of the theoretical maximum. The residual methane yield from straw stillage was estimated to be 183 l_N kg^-1 wheat straw dry matter. Based on the presented experimental data, a concept is proposed that processes wheat straw for ethanol and methane production. The concept of an energy supply system that provides more than two forms of energy is met by (1) upgrading the obtained ethanol to fuel-grade quality and providing methane to CHP plants for the production of (2) electric energy and (3) utility steam that in turn can be used to operate distillation columns in the
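A back-of-envelope check of the fractions quoted in this abstract; the numbers are taken directly from the text, nothing new is assumed.

```python
theoretical_ethanol = 0.249   # kg ethanol per kg straw dry matter (theoretical maximum)
achieved_ethanol    = 0.200   # kg/kg after steam explosion (200 C, 10 min) + hydrolysis

fraction = achieved_ethanol / theoretical_ethanol
print(f"fraction of theoretical maximum: {fraction:.0%}")   # ~80%, matching the stated figure
# The residual stillage then adds an estimated 183 l_N methane per kg straw dry matter.
```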
Jin Seop Bak
2014-12-01
In order to overcome the limitations of commercial electron beam irradiation (EBI), lignocellulosic rice straw (RS) was pretreated using water-soaking-based electron beam irradiation (WEBI). This environment-friendly pretreatment, without the formation or release of inhibitory compounds (especially hydroxymethylfurfural and furfural), significantly increased the enzymatic hydrolysis and fermentation yields of RS. Specifically, when water-soaked RS (solid:liquid ratio of 100%) was treated with WEBI doses of 1 MeV at 80 kGy and 0.12 mA, the glucose yield after 120 h of hydrolysis was 70.4% of the theoretical maximum. This value was markedly higher than the 29.5% and 52.1% measured for untreated and EBI-treated RS, respectively. Furthermore, after simultaneous saccharification and fermentation for 48 h, the ethanol concentration, production yield, and productivity were 9.3 g/L, 57.0% of the theoretical maximum, and 0.19 g/L h, respectively. Finally, scanning electron microscopy images revealed that WEBI induced significant ultrastructural changes to the surface of the lignocellulosic fibers.
Shijian YUAN; Dazhi XIAO; Zhubin HE
2004-01-01
A generalized yield criterion is proposed based on metal plastic deformation mechanics and the fundamental formulae of the theory of plasticity. Using the generalized yield criterion, the reason why the Mises and Tresca yield criteria do not completely match experimental data is explained. It is shown that the yield criteria of ductile metals depend not only on the quadratic invariant of the deviatoric stress tensor, J2, but also on the cubic invariant of the deviatoric stress tensor, J3, and on the ratio k/σs of the yield stress in pure shear to the yield stress in uniaxial tension. The reason that the Mises and Tresca yield criteria are not in good agreement with the experimental data is that the effect of J3 and k/σs is neglected.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max - X_0 = (0.59 ± 0.02)·Y_X/P·C.
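Direct use of the prediction equation quoted above, X_max - X_0 = (0.59 ± 0.02)·Y_X/P·C, taking the central coefficient 0.59; the inoculum size, biomass yield and MIC plugged in below are illustrative values, not data from the study.

```python
def max_biomass(x0_g_per_l, yield_x_per_lactate, mic_lactate_g_per_l, k=0.59):
    """Predicted maximum biomass X_max [g/L] from the abstract's equation."""
    return x0_g_per_l + k * yield_x_per_lactate * mic_lactate_g_per_l

# e.g. inoculum 0.1 g/L, 0.15 g biomass per g lactate, MIC of lactate 40 g/L (assumed)
print(max_biomass(0.1, 0.15, 40.0), "g/L predicted maximum biomass")
```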
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Soft photon yield in nuclear interactions
Kokoulina, E
2015-01-01
First results of a study of the soft photon yield at the Nuclotron (LHEP, JINR) in nucleus-nucleus collisions at 3.5 GeV per nucleon are presented. These photons are registered by a BGO electromagnetic calorimeter built by the SVD-2 Collaboration. The obtained spectra confirm the excess yield in the energy region below 50 MeV in comparison with theoretical estimations and agree with previous experiments on high-energy interactions.
Soft photon yield in nuclear interactions
Kokoulina, E.
2016-01-01
First results of the study of the soft photon yield in nucleus-nucleus collisions at 3.5 GeV per nucleon at the Nuclotron (LHEP, JINR) are presented. These photons are registered by a BGO electromagnetic calorimeter built by the SVD-2 Collaboration. The obtained spectra confirm the excess yield in the energy region below 50 MeV in comparison with theoretical estimations and agree with previous experiments on high-energy interactions.
Soft photon yield in nuclear interactions
Kokoulina E.
2016-01-01
First results of the study of the soft photon yield in nucleus-nucleus collisions at 3.5 GeV per nucleon at the Nuclotron (LHEP, JINR) are presented. These photons are registered by a BGO electromagnetic calorimeter built by the SVD-2 Collaboration. The obtained spectra confirm the excess yield in the energy region below 50 MeV in comparison with theoretical estimations and agree with previous experiments on high-energy interactions.
Essays in petroleum futures market, convenience yield, and long memory
Mazaheri, Ataollah
This thesis is a collection of three essays which address some empirical applications of long memory processes with specific interest in financial economics of energy futures market. The first essay 'Evidence of Long Memory in the Petroleum Market' studies evidence of long memory in the energy market using daily and weekly futures data. This essay concentrates on the question of interdependence between crude oil futures and the corresponding products. The empirical results provide strong support for long memory in the energy futures market. The cointegrating relations between crude oil and heating oil futures as well as crude oil and unleaded gasoline futures exhibit long memory, whereas the individual series are unit-root. The second essay 'Convenience Yield, Mean Reversion and Long Memory in the Petroleum Market' analyzes convenience yields in the petroleum market. The focus of this essay is the behavior of the spot and futures prices over the long run. The implied convenience yield for petroleum and petroleum products is found to be driven by a nonstationary and mean reverting long memory process. The theoretical implication of this finding is established. It is discussed that this might be attributed to the fact that the market is expecting mean reversion in the spot prices. Furthermore, the volatility process and its relation with the mean process and the corresponding direction of causality have been studied in detail. The third essay 'Long Memory and Conditional Heteroskedasticity, A Monte Carlo Investigation', unlike the first two, looks at the econometrics of the estimators of the long memory process. It evaluates performance of three methods of estimating the parameter of fractionally integrated noise: the exact maximum likelihood estimator (MLE), the quasi maximum likelihood estimator (QMLE), and the GPH under different realizations for variance.
N-acetylcysteine increased rice yield
NOZULAIDI, MOHD; JAHAN, MD SARWAR; KHAIRI, MOHD; Khandaker, Mohammad Moneruzzaman; Mat NASHRIYAH; KHANIF, YUSOP MOHD
2015-01-01
N-acetylcysteine (NAC) is used to biosynthesize reduced glutathione (GSH), which maintains redox homeostasis in plants under normal and stressful conditions. To assess the effects of NAC on rice production, we measured yield parameters, chlorophyll (Chl) content, minimum Chl fluorescence (Fo), maximum Chl fluorescence (Fm), quantum yield (Fv/Fm), net photosynthesis rate (Pn), photosynthetically active radiation (PAR), and relative water content (RWC). Four treatments, N1G0 (nitrogen (N) with no NAC), ...
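For reference, the standard chlorophyll-fluorescence relation behind the quantum-yield parameter listed in this abstract; this is the general definition, not a result of the study itself.

```latex
% F_o: minimum fluorescence (dark-adapted), F_m: maximum fluorescence,
% F_v = F_m - F_o: variable fluorescence.
\[
  \frac{F_v}{F_m} \;=\; \frac{F_m - F_o}{F_m}
\]
```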
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and---to a lesser extent---the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index; deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
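A minimal sketch of the spectrally imposed limit discussed in this abstract: the luminous efficacy of radiation is K = 683 lm/W × ∫V(λ)S(λ)dλ / ∫S(λ)dλ. Here S is taken as a 5800 K blackbody truncated to a 400-700 nm bandpass and V(λ) is approximated by a Gaussian photopic curve (centre 555 nm, width 42 nm); both the bandpass and the Gaussian approximation are assumptions, so the printed value only illustrates the calculation, not the paper's 250-370 lm/W result.

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
lam = np.linspace(400e-9, 700e-9, 2000)          # truncated "white" bandpass [m]
T = 5800.0                                       # assumed colour temperature [K]

S = lam**-5 / (np.exp(h * c / (lam * kB * T)) - 1.0)      # blackbody shape (arb. units)
V = np.exp(-0.5 * ((lam - 555e-9) / 42e-9) ** 2)          # rough photopic sensitivity curve

K = 683.0 * np.trapz(V * S, lam) / np.trapz(S, lam)       # lumens per radiant watt
print(f"luminous efficacy of this truncated spectrum: {K:.0f} lm/W")
```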
Felipe Schneider
2010-06-01
The experiment was carried out in the west of Paraná state, on a clayey Red Eutroferric Latosol. The objective was to determine, at establishment, the available P concentration in the soil and the critical doses of P for dry matter (DM) yield and tillering and, in the 2nd year, the growth of Panicum maximum cvs. Mombaça and Tanzânia-1 and the Brachiaria sp. hybrid Mulato. The treatments were three forages and five P2O5 rates (0, 40, 80, 120 and 240 kg/ha) in a factorial arrangement, randomized in three complete blocks. The phosphorus rates linearly increased the available P in the soil extracted by the Mehlich-1 method (ŷ = -4.5136 + 1.0241X, R² = 0.96, ŷ in mg/dm³). The P application increased, up to a maximum, the DM yield of the forages Mombaça (ŷ = 6,472 + 74.41X - 0.241X², R² = 0.97), Tanzânia-1 (ŷ = 6,923 + 70.95X - 0.249X², R² = 0.88) and Mulato (ŷ = 7,393 + 94.42X - 0.341X², R² = 0.72), as well as the tiller density (TD). The critical phosphorus rates were 54, 44 and 48 kg/ha of P2O5 for Mombaça, Tanzânia-1 and Mulato, respectively, and the critical P concentrations in the soil were 51, 41 and 44 mg/dm³. At establishment, Mulato grass presented the highest DM yield and TD (11,169 kg/ha and 69 tillers/0.25 m²). The DM yield and TD of Mombaça grass (9,787 kg/ha and 54 tillers/0.25 m²) and Tanzânia grass (9,563 kg/ha and 52 tillers/0.25 m²) were equal. In the 2nd year, there were no variations in DM yield. The highest leaf elongation rate (LER) and leaf appearance rate (LAR) were obtained in Mombaça grass and Mulato grass, respectively. Mulato grass presented a lower phyllochron.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find the series of example theories for which the relativistic limit of maximum tension $F_{max} = c^2/4G$ represented by the entropic force can be abolished. Among them the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Low yield sputtering of monocrystalline metals
Veen, A. van; Fluit, J.M.
1980-01-01
Sputtering of monocrystalline metals by light noble gas ions is studied experimentally and theoretically at low primary ion energy. Evidence is found for a multiple collision process in which surface atoms are sputtered by backscattered ions. The introduction of the maximum recoil energy EM in the s
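This record is cut off before E_M is defined; for context only, the standard maximum energy transferred in a single elastic binary collision between an ion of mass M1 and energy E0 and a target atom of mass M2 is quoted below. That this is the E_M intended by the authors is an assumption.

```latex
\[
  E_M \;=\; \frac{4\,M_1 M_2}{(M_1 + M_2)^2}\, E_0
\]
```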
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\\mathrm{T}}$ and OSE$_{\\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
Seeking maximum linearity of transfer functions
Silva, Filipi N.; Comin, Cesar H.; Costa, Luciano da F.
2016-12-01
Linearity is an important and frequently sought property in electronics and instrumentation. Here, we report a method capable of, given a transfer function (theoretical or derived from some real system), identifying the respective most linear region of operation with a fixed width. This methodology, which is based on least squares regression and systematic consideration of all possible regions, has been illustrated with respect to both an analytical (sigmoid transfer function) and a simple situation involving experimental data of a low-power, one-stage class A transistor current amplifier. Such an approach, which has been addressed in terms of transfer functions derived from experimentally obtained characteristic surface, also yielded contributions such as the estimation of local constants of the device, as opposed to typically considered average values. The reported method and results pave the way to several further applications in other types of devices and systems, intelligent control operation, and other areas such as identifying regions of power law behavior.
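A minimal sketch of the procedure described above, not the authors' code: slide a fixed-width window along a sampled transfer function, fit a straight line by least squares in each window, and keep the window with the smallest residual. The sigmoid transfer function and the window width are illustrative assumptions.

```python
import numpy as np

x = np.linspace(-6.0, 6.0, 601)
y = 1.0 / (1.0 + np.exp(-x))           # example "transfer function" (sigmoid)
width = 120                            # fixed window width in samples (assumed)

best = None
for start in range(len(x) - width):
    xs, ys = x[start:start + width], y[start:start + width]
    coeffs, residuals, *_ = np.polyfit(xs, ys, 1, full=True)   # linear least squares fit
    rss = residuals[0] if residuals.size else 0.0
    if best is None or rss < best[0]:
        best = (rss, xs[0], xs[-1], coeffs)

rss, lo, hi, (slope, intercept) = best
print(f"most linear region: [{lo:.2f}, {hi:.2f}], local gain ~ {slope:.3f}")
```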
Seeking Maximum Linearity of Transfer Functions
Silva, Filipi N; Costa, Luciano da F
2016-01-01
Linearity is an important and frequently sought property in electronics and instrumentation. Here, we report a method capable of, given a transfer function, identifying the respective most linear region of operation with a fixed width. This methodology, which is based on least squares regression and systematic consideration of all possible regions, has been illustrated with respect to both an analytical (sigmoid transfer function) and real-world (low-power, one-stage class A transistor amplifier) situations. In the former case, the method was found to identify the theoretically optimal region of operation even in the presence of noise. In the latter case, it was possible to identify an amplifier circuit configuration providing a good compromise between linearity, amplification and output resistance. The transistor amplifier application, which was addressed in terms of transfer functions derived from its experimentally obtained characteristic surface, also yielded contributions such as the estimation of local cons...
Yield stress fluids slowly yield to analysis
Bonn, D.; Denn, M.M.
2009-01-01
We are surrounded in everyday life by yield stress fluids: materials that behave as solids under small stresses but flow like liquids beyond a critical stress. For example, paint must flow under the brush, but remain fixed in a vertical film despite the force of gravity. Food products (such as mayon
Analysis on Wheat Yield in China Based on the Prediction of Yield Potential
2011-01-01
The maximum growth range of wheat yield per unit area in China is analyzed from three aspects: the photosynthetic production potential of wheat, the changing trend of per-unit wheat yield in previous years, and the potential of the crop's distribution area. In the paper, the light-use potential, the external potential of the historical yield evolution trend, and the AEZ (agro-ecological zone) method are applied to calculate the per-unit yield potential of Chinese wheat. The results show that the maximum growth range of per-unit yield differed between stages: before 1991, the growth range was 10%; before 1996, 9%; before 2000, 8%. Any wheat variety or planting technology claiming a yield increase higher than the above growth range could only be promoted in a restricted area and is subject to statistical error. The results are of reference significance for Chinese wheat production.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Maximizing oil yields may not optimize economics
1987-03-01
The Los Alamos National Laboratory has used the ASPEN computer code to calculate the economics of different hydroretorting conditions. When the oil yield was maximized and an oil shale plant was designed around this process, the costs turned out to be much higher than expected. However, calculations based on runs at less than maximum yields gave lower cost estimates. It is recommended that future efforts concentrate on minimizing production costs rather than maximizing yields. An oil shale plant has been designed around minimum production cost, but it has not yet been tested experimentally.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we give a representation of the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give a detailed solving procedure, which can be divided into two steps. The first step solves the optimization problem without considering the equal margin posteriors from the two views; in the second step, the equal posteriors are imposed. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...... in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results....... Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges....
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
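A small sketch of the quantity being maximized: the Kirchhoff index equals n times the trace of the Moore-Penrose pseudoinverse of the graph Laplacian, which is the same as the sum of resistance distances over all vertex pairs. The example graph (a triangle with a pendant edge, which is a small cactus) is an assumption chosen only for illustration.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # triangle plus a pendant vertex, n = 4
n = 4

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A              # graph Laplacian
Kf = n * np.trace(np.linalg.pinv(L))        # Kirchhoff index = sum of resistance distances
print(f"Kf = {Kf:.4f}")
```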
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurements outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders....
Optimizing rice yields while minimizing yield-scaled global warming potential.
Pittelkow, Cameron M; Adviento-Borbe, Maria A; van Kessel, Chris; Hill, James E; Linquist, Bruce A
2014-05-01
To meet growing global food demand with limited land and reduced environmental impact, agricultural greenhouse gas (GHG) emissions are increasingly evaluated with respect to crop productivity, i.e., on a yield-scaled as opposed to area basis. Here, we compiled available field data on CH4 and N2O emissions from rice production systems to test the hypothesis that in response to fertilizer nitrogen (N) addition, yield-scaled global warming potential (GWP) will be minimized at N rates that maximize yields. Within each study, yield N surplus was calculated to estimate deficit or excess N application rates with respect to the optimal N rate (defined as the N rate at which maximum yield was achieved). Relationships between yield N surplus and GHG emissions were assessed using linear and nonlinear mixed-effects models. Results indicate that yields increased in response to increasing N surplus when moving from deficit to optimal N rates. At N rates contributing to a yield N surplus, N2O and yield-scaled N2O emissions increased exponentially. In contrast, CH4 emissions were not impacted by N inputs. Accordingly, yield-scaled CH4 emissions decreased with N addition. Overall, yield-scaled GWP was minimized at optimal N rates, decreasing by 21% compared to treatments without N addition. These results are unique compared to aerobic cropping systems in which N2O emissions are the primary contributor to GWP, meaning yield-scaled GWP may not necessarily decrease for aerobic crops when yields are optimized by N fertilizer addition. Balancing gains in agricultural productivity with climate change concerns, this work supports the concept that high rice yields can be achieved with minimal yield-scaled GWP through optimal N application rates. Moreover, additional improvements in N use efficiency may further reduce yield-scaled GWP, thereby strengthening the economic and environmental sustainability of rice systems.
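An illustration of the yield-scaling used above: area-based emissions are converted to CO2-equivalents and divided by grain yield. The emission and yield numbers are made up, and the 100-year GWP factors used (28 for CH4, 265 for N2O, AR5 values) are an assumption; the study's own accounting choices may differ.

```python
ch4_kg_per_ha, n2o_kg_per_ha = 150.0, 1.2      # seasonal emissions per hectare (illustrative)
grain_yield_kg_per_ha = 8000.0                  # rice grain yield (illustrative)

gwp_area = ch4_kg_per_ha * 28.0 + n2o_kg_per_ha * 265.0   # kg CO2-eq per ha (assumed factors)
gwp_yield_scaled = gwp_area / grain_yield_kg_per_ha        # kg CO2-eq per kg grain
print(f"{gwp_area:.0f} kg CO2-eq/ha -> {gwp_yield_scaled:.3f} kg CO2-eq/kg grain")
```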
Statistical modelling and deconvolution of yield meter data
Tøgersen, Frede Aakmann; Waagepetersen, Rasmus Plenge
previously harvested along the swath. The unobserved yield is assumed to be a Gaussian random field and the yield monitoring system data is modelled as a convolution of the yield and an impulse response function. This results in an unusual spatial covariance structure (depending on the driving pattern......Data for yield maps can be obtained from modern combine harvesters equipped with a differential global positioning system and a yield monitoring system. Due to delay and smoothing effects in the combine harvester the recorded yield data for a location represents a shifted weighted average of yield...... of the combine harvester) for the yield monitoring system data. Parameters of the impulse response function and the spatial covariance function of the yield are estimated using maximum likelihood. The fitted model is assessed using certain empirical directional covariograms and the yield is finally predicted...
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices is not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Ramos, K. J.; Bahr, D. F.; Hooks, D. E.
2011-03-01
The onset of plastic deformation was investigated using nanoindentation in single crystals of the explosive cyclotrimethylene trinitramine (RDX). Cleavage and habit planes were tested revealing a range of yielding behaviors. Smooth habit planes of unprocessed single crystals exhibited distinct yield points near the theoretical shear strength; planes produced by cleavage yielded at lower applied stresses. Cumulative probability distributions of maximum shear stresses at yield were used to illustrate the representative yielding behavior for samples prepared by the different methods. A statistically significant difference was observed for cleavage and habit planes. This suggested that structural defects, such as dislocations from growth and sample preparation, were being probed and nanoindentation can be used to correlate the mechanical response of organic molecular crystals with defect density. This capability may help explain the observed range of measurement differences in fundamental properties in this class of materials, such as sensitivity to the initiation of detonation in explosives, and disparate tablet integrity and stability responses in polymorphs of some pharmaceutical materials.
On the maximum grain size entrained by photoevaporative winds
Hutchison, Mark A; Maddison, Sarah T
2016-01-01
We model the behaviour of dust grains entrained by photoevaporation-driven winds from protoplanetary discs assuming a non-rotating, plane-parallel disc. We obtain an analytic expression for the maximum entrainable grain size in extreme-UV radiation-driven winds, which we demonstrate to be proportional to the mass loss rate of the disc. When compared with our hydrodynamic simulations, the model reproduces almost all of the wind properties for the gas and dust. In typical turbulent discs, the entrained grain sizes in the wind are smaller than the theoretical maximum everywhere but the inner disc due to dust settling.
Modified maximum likelihood registration based on information fusion
Yongqing Qi; Zhongliang Jing; Shiqiang Hu
2007-01-01
The bias estimation of passive sensors is considered based on information fusion in a multi-platform multisensor tracking system. The unobservability problem of bearing-only tracking in the blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of the multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of the unobservability problem in the blind spot and can estimate the biases more rapidly and accurately than the maximum likelihood method. It is statistically efficient since the standard deviation of the bias estimation errors meets the theoretical lower bounds.
A discussion on maximum entropy production and information theory
Bruers, Stijn [Instituut voor Theoretische Fysica, Celestijnenlaan 200D, Katholieke Universiteit Leuven, B-3001 Leuven (Belgium)
2007-07-06
We will discuss the maximum entropy production (MaxEP) principle based on Jaynes' information theoretical arguments, as was done by Dewar (2003 J. Phys. A: Math. Gen. 36 631-41, 2005 J. Phys. A: Math. Gen. 38 371-81). With the help of a simple mathematical model of a non-equilibrium system, we will show how to derive minimum and maximum entropy production. Furthermore, the model will help us to clarify some confusing points and to see differences between some MaxEP studies in the literature.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
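To make the regularizer concrete, the short sketch below computes the empirical mutual information between discretized classification responses and true class labels, which is the quantity the abstract proposes to maximize. It is not the authors' learning algorithm (which models mutual information via entropy estimation and optimizes it by gradient descent); the toy responses and all parameters are invented for illustration.

```python
import numpy as np

def mutual_information(responses, labels, n_bins=10):
    """Empirical mutual information (in nats) between discretized classifier
    responses and binary class labels, estimated from the joint histogram."""
    cuts = np.quantile(responses, np.linspace(0, 1, n_bins + 1)[1:-1])
    r_bins = np.digitize(responses, cuts)          # bin index 0..n_bins-1 per sample
    joint = np.zeros((n_bins, 2))
    for r, y in zip(r_bins, labels):
        joint[r, y] += 1
    joint /= joint.sum()
    pr = joint.sum(axis=1, keepdims=True)          # marginal of the response bins
    py = joint.sum(axis=0, keepdims=True)          # marginal of the labels
    nz = joint > 0
    return np.sum(joint[nz] * np.log(joint[nz] / (pr @ py)[nz]))

# Invented toy data: responses of an informative vs. an uninformative classifier.
rng = np.random.default_rng(7)
labels = rng.integers(0, 2, 2000)
informative = labels + rng.normal(0, 0.5, 2000)     # correlates with the label
uninformative = rng.normal(0, 1.0, 2000)            # ignores the label
print(mutual_information(informative, labels), mutual_information(uninformative, labels))
```

The informative responses should yield a clearly larger value, which is the behavior the regularizer rewards during training.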
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Theoretical Study of a Spherical Plasma Focus
Ay, Yasar
, electrodynamics, thermodynamics, and radiations emitted from the plasma focus, the discharge current wave form has been used to validate the model. A good agreement has been achieved between theoretical calculation and the experimental measurement of a similar spherical plasma focus device. The snowplow model with the help of the shock wave equations coupled to the circuit equations is used to predict the plasma and shock wave parameters by using the momentum and magnetic force equations. While these equations are used in the phases of the rundown phase I and II, the reflected shock phase with the necessary modification of the magnetic field calculation, and the constant reflected shock front velocity; the radiative phase additionally includes the effect of the radiations emitted from the plasma column (Bremsstrahlung, line and radiative recombination), and the joule heating with the plasma resistance. Neutron yield and the ion properties are calculated in the radiative phase. The parameters for the spherical plasma focus are 8.0 and 14.5 cm inner and outer electrode radii, respectively, 432 μF capacitor bank, 25 kV charging voltage, and 14.5 Torr DT gas pressure. A high discharge current of about 1.5 MA, a high neutron yield of 1.13 × 10^13 neutrons, and a high plasma column-ion density of 1.61 × 10^24 m^-3 are achieved with the given parameters. The developed model is also used to investigate the effect of the gas pressure, discharge voltage, and the molecular mass of the gas on the maximum plasma temperature and pinch start time. It is found that the maximum plasma temperature can be obtained with a relatively shorter pinch start time using a relatively heavier gas with lower gas pressure and higher discharge voltage.
department of Agricultural Engineering, University of Ibadan, Nigeria. (Received 28 ... properties, growth and shoot yield of large-green leafy amaranth (Amaranth sp.). Soil moisture ... microorganisms which stimulate the physical processes ... to plants and, consequently, crop establishment ... sustainable soil structure.
evaluate new interspecific genotypes for intensified double cropping of irrigated rice. The experimental ... the performance of the new irrigated .... nursing at a spacing of 20 cm between plants ..... if new technologies, comprising high yielding.
Smoothed log-concave maximum likelihood estimation with applications
Chen, Yining
2011-01-01
We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.
Mid-infrared predictions of cheese yield from bovine milk
Vanlierde, Amélie; Soyeurt, Hélène; Anceau, Christine; Vanden Bossche, sandrine; Dehareng, Frédéric; Pierre DARDENNE; Gengler, Nicolas; Sindic, Marianne; Colinet, Frédéric
2011-01-01
Economically, cheese yield (CY) is very important. To date, empirical or theoretical formulae allow estimating the theoretical CY from the fat and casein or protein content of milk. It would be interesting to predict CY during milk recording directly, without the need to estimate milk components. Through the BlueSel project, 157 milk samples were collected in Wallonia from individual cows and analyzed using a mid-infrared (MIR) MilkoScanFT6000 spectrometer. Individual laboratory cheese yields ...
Maximum organic loading rate for the single-stage wet anaerobic digestion of food waste.
Nagao, Norio; Tajima, Nobuyuki; Kawai, Minako; Niwa, Chiaki; Kurosawa, Norio; Matsuyama, Tatsushi; Yusoff, Fatimah Md; Toda, Tatsuki
2012-08-01
Anaerobic digestion of food waste was conducted at high OLR from 3.7 to 12.9 kg-VS m^-3 day^-1 for 225 days. Periods without organic loading were arranged between each loading period. Stable operation at an OLR of 9.2 kg-VS (15.0 kg-COD) m^-3 day^-1 was achieved with a high VS reduction (91.8%) and high methane yield (455 mL g-VS^-1). The cell density increased in the periods without organic loading, and reached 10.9×10^10 cells mL^-1 on day 187, which was around 15 times higher than that of the seed sludge. There was a significant correlation between OLR and saturated TSS in the sludge (y = 17.3e^(0.1679x), r² = 0.996, P<0.05). A theoretical maximum OLR of 10.5 kg-VS (17.0 kg-COD) m^-3 day^-1 was obtained for mesophilic single-stage wet anaerobic digestion that is able to maintain a stable operation with high methane yield and VS reduction.
Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains
Erik Van der Straeten
2009-11-01
In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.
Crop Yield Forecasted Model Based on Time Series Techniques
Li Hong-ying; Hou Yan-lin; Zhou Yong-juan; Zhao Hui-ming
2012-01-01
Traditional studies on potential yield mainly referred to attainable yield: the maximum yield which could be reached by a crop in a given environment. A new concept of crop yield under average climate conditions, which is affected by the advancement of science and technology, was defined in this paper. Based on the new concept of crop yield, time series techniques relying on past yield data were employed to set up a forecasting model. The model was tested using the average grain yields of Liaoning Province in China from 1949 to 2005. The testing combined dynamic n-choosing and micro tendency rectification, and the average forecasting error was 1.24%. A turning point may occur in the trend line of yield change, in which case an inflexion model was used to handle the yield turning point.
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic-there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure is sufficiently well-known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
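The following sketch illustrates the underlying idea of a Poisson maximum-likelihood line fit of the kind CORA performs: the Poisson log-likelihood of a Gaussian line plus constant background is maximized over the model parameters. It is not the CORA code; the wavelength grid, line parameters, and optimizer choice are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Invented example: one emission line on a flat background, few counts per bin.
wave = np.linspace(13.3, 13.7, 200)                       # wavelength grid (Angstrom)
true = 0.05 + 3.0 * np.exp(-0.5 * ((wave - 13.5) / 0.01) ** 2)
counts = rng.poisson(true)                                # observed counts per bin

def model(params, x):
    bkg, amp, center, sigma = params
    return bkg + amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def neg_log_likelihood(params):
    mu = model(params, wave)
    if np.any(mu <= 0):
        return np.inf
    # Poisson log-likelihood up to a constant: sum(n * log(mu) - mu)
    return -np.sum(counts * np.log(mu) - mu)

res = minimize(neg_log_likelihood, x0=[0.1, 1.0, 13.5, 0.02], method="Nelder-Mead")
bkg, amp, center, sigma = res.x
line_flux = amp * sigma * np.sqrt(2 * np.pi)              # integrated line counts
print(res.x, line_flux)
```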
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem till date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
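As a rough illustration of the kind of chain involved, the toy sampler below runs Glauber dynamics on matchings with fugacity λ: a random edge is toggled into or out of the matching with probabilities λ/(1+λ) and 1/(1+λ), subject to the matching constraint. This is only a didactic sketch, not the O(m log² n) algorithm of the paper; the example graph and parameters are invented.

```python
import random

def glauber_matching(edges, n_vertices, lam=2.0, steps=20000, seed=0):
    """Toy Glauber dynamics on matchings: repeatedly pick a random edge and try
    to toggle its membership while preserving the matching constraint."""
    random.seed(seed)
    matching = set()
    matched = [False] * n_vertices
    best = set()
    for _ in range(steps):
        u, v = random.choice(edges)
        if (u, v) in matching:
            # propose removal, accepted with probability 1/(1+lam)
            if random.random() < 1.0 / (1.0 + lam):
                matching.discard((u, v))
                matched[u] = matched[v] = False
        elif not matched[u] and not matched[v]:
            # propose insertion, accepted with probability lam/(1+lam)
            if random.random() < lam / (1.0 + lam):
                matching.add((u, v))
                matched[u] = matched[v] = True
        if len(matching) > len(best):
            best = set(matching)
    return best

# Hypothetical example: a 6-cycle, whose maximum matching has size 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
print(len(glauber_matching(edges, 6)))   # typically reaches 3 after enough steps
```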
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Roy Choudhury, Kingshuk; O'Sullivan, Finbarr; Kasman, Ian; Plowman, Greg D
2012-12-20
Measurements in tumor growth experiments are stopped once the tumor volume exceeds a preset threshold: a mechanism we term volume endpoint censoring. We argue that this type of censoring is informative. Further, least squares (LS) parameter estimates are shown to suffer a bias in a general parametric model for tumor growth with an independent and identically distributed measurement error, both theoretically and in simulation experiments. In a linear growth model, the magnitude of bias in the LS growth rate estimate increases with the growth rate and the standard deviation of measurement error. We propose a conditional maximum likelihood estimation procedure, which is shown both theoretically and in simulation experiments to yield approximately unbiased parameter estimates in linear and quadratic growth models. Both LS and maximum likelihood estimators have similar variance characteristics. In simulation studies, these properties appear to extend to the case of moderately dependent measurement error. The methodology is illustrated by application to a tumor growth study for an ovarian cancer cell line.
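The small simulation below illustrates the claimed least-squares bias under volume endpoint censoring for a linear growth model: observations stop after the first measurement exceeds a threshold, and the fitted slope is compared across noise levels. The growth rate, threshold, and noise levels are invented, and this reproduces only the qualitative effect, not the paper's conditional maximum likelihood estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_ls_slope(rate, sigma, threshold=2.0, n_times=15, n_rep=2000):
    """Average least-squares slope when volumes are observed only until the
    first measurement exceeds the threshold (volume endpoint censoring)."""
    t = np.arange(n_times, dtype=float)
    slopes = []
    for _ in range(n_rep):
        v = rate * t + rng.normal(0.0, sigma, size=n_times)
        above = np.nonzero(v > threshold)[0]
        stop = above[0] + 1 if above.size else n_times    # keep data up to first exceedance
        if stop < 3:
            continue
        slopes.append(np.polyfit(t[:stop], v[:stop], 1)[0])
    return np.mean(slopes)

# True slope is 0.2; the censored LS estimate drifts away as the noise grows.
for sigma in (0.1, 0.3, 0.5):
    print(sigma, mean_ls_slope(rate=0.2, sigma=sigma))
```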
Ulrich, Clara; Vermard, Youen; Dolder, Paul J.
2016-01-01
ranges to combine long-term single-stock targets with flexible, short-term, mixed-fisheries management requirements applied to the main North Sea demersal stocks. It is shown that sustained fishing at the upper bound of the range may lead to unacceptable risks when technical interactions occur....... An objective method is suggested that provides an optimal set of fishing mortality within the range, minimizing the risk of total allowable catch mismatches among stocks captured within mixed fisheries, and addressing explicitly the trade-offs between the most and least productive stocks....
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Maximum-likelihood estimation prevents unphysical Mueller matrices
Aiello, A; Voigt, D; Woerdman, J P
2005-01-01
We show that the method of maximum-likelihood estimation, recently introduced in the context of quantum process tomography, can be applied to the determination of Mueller matrices characterizing the polarization properties of classical optical systems. Contrary to linear reconstruction algorithms, the proposed method yields physically acceptable Mueller matrices even in the presence of uncontrolled experimental errors. We illustrate the method for the case of an unphysical measured Mueller matrix taken from the literature.
The effects of the regulated deficit irrigation on yield and some yield ...
user
2011-05-16
May 16, 2011 ... phases: (1) Vegetative stage (V); from seed germination to the beginning of flowering and (2) .... recorded maximum and the minimum temperatures were 32.7 and. 11.7°C in ..... Quality and yield response of soybean (Glycine.
Subsequent yield loci of 5754O aluminum alloy sheet
WANG Hai-bo; WAN Min; WU Xiang-dong; YAN Yu
2009-01-01
Complex loading paths were realized with cruciform specimens and a biaxial loading testing machine. An experimental method for determining the subsequent yield locus of sheet metal was established. With this method, the subsequent yield loci of 5754O aluminum alloy sheet were obtained under complex loading paths. Theoretical subsequent yield loci based on the Yld2000-2d yield criterion and three kinds of hardening modes were calculated and compared with the experimental results. The results show that the theoretical subsequent yield loci based on the mixed hardening mode describe the experimental subsequent yield loci well, whereas the isotropic hardening mode, which is widely used in sheet metal forming, predicts values larger than the experimental results. The kinematic hardening mode predicts values smaller than the experimental results, and its errors are the largest.
Production yield analysis in the poultry processing industry
Somsen, D.J.; Capelle, A.; Tramper, J.
2004-01-01
The paper outlines a case study where the PYA-method (production yield analysis) was implemented at a poultry-slaughtering line, processing 9000 broiler chicks per hour. It was shown that the average live weight of a flock of broilers could be used to predict the maximum production yield of the part
Combined Nucleosynthetic Yields of Multiple First Stars
Chan, Conrad
2016-01-01
Modern numerical simulations of the formation of the first stars predict that the first stars formed in multiples. In those cases, the chemical yields of multiple supernova explosions may have contributed to the formation of a next generation star. We match the chemical abundances of the oldest observed stars in the universe to a database of theoretical supernova models, to show that it is likely that the first stars formed from the ashes of two or more progenitors.
Zhu, Xinna; Tan, Zaigao; Xu, Hongtao; Chen, Jing; Tang, Jinlei; Zhang, Xueli
2014-07-01
Reducing equivalents are an important cofactor for efficient synthesis of target products. During metabolic evolution to improve succinate production in Escherichia coli strains, two reducing equivalent-conserving pathways were activated to increase succinate yield. The sensitivity of pyruvate dehydrogenase to NADH inhibition was eliminated by three nucleotide mutations in the lpdA gene. Pyruvate dehydrogenase activity increased under anaerobic conditions, which provided additional NADH. The pentose phosphate pathway and transhydrogenase were activated by increased activities of transketolase and soluble transhydrogenase SthA. These data suggest that more carbon flux went through the pentose phosphate pathway, thus leading to production of more reducing equivalent in the form of NADPH, which was then converted to NADH through soluble transhydrogenase for succinate production. Reverse metabolic engineering was further performed in a parent strain, which was not metabolically evolved, to verify the effects of activating these two reducing equivalent-conserving pathways for improving succinate yield. Activating pyruvate dehydrogenase increased succinate yield from 1.12 to 1.31 mol/mol, whereas activating the pentose phosphate pathway and transhydrogenase increased succinate yield from 1.12 to 1.33 mol/mol. Activating these two pathways in combination led to a succinate yield of 1.5 mol/mol (88% of theoretical maximum), suggesting that they exhibited a synergistic effect for improving succinate yield.
Yu, Y.
2016-01-01
τ → ω3πν decays
Gao, J.; Li, B.A. [Kentucky Univ., Lexington, KY (United States). Dept. of Physics and Astronomy]
2001-11-01
A theoretical study of the anomalous decay mode τ → ωπππν is presented. The theoretical value of the branching ratio of τ⁻ → ωπ⁻π⁰π⁰ν agrees well with the data. The branching ratio of τ⁻ → ωπ⁺π⁻π⁻ν_τ is predicted. It is found that the vertices of a₁ρπ and ωρπ play a dominant role in these two decay modes. CVC is satisfied, and there is no adjustable parameter. (orig.)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
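A minimal sketch of the idea in these two records, under the assumption that the fixed-point (EM-type) update for a two-component normal mixture is relaxed with a step size ω between 0 and 2 (ω = 1 recovers the standard successive-approximations procedure). The data, starting values, and safeguards are invented; this is not the authors' exact iteration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
# Invented data: two-component univariate normal mixture.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(2.5, 1.0, 700)])

def em_step(x, pi, mu1, mu2, s1, s2):
    """One round of the usual fixed-point (EM) update for a 2-component normal mixture."""
    r1 = pi * norm.pdf(x, mu1, s1)
    r2 = (1.0 - pi) * norm.pdf(x, mu2, s2)
    g = r1 / (r1 + r2)                                   # responsibilities of component 1
    mu1_new = np.sum(g * x) / np.sum(g)
    mu2_new = np.sum((1 - g) * x) / np.sum(1 - g)
    s1_new = np.sqrt(np.sum(g * (x - mu1_new) ** 2) / np.sum(g))
    s2_new = np.sqrt(np.sum((1 - g) * (x - mu2_new) ** 2) / np.sum(1 - g))
    return np.array([g.mean(), mu1_new, mu2_new, s1_new, s2_new])

theta = np.array([0.5, -1.0, 1.0, 1.5, 1.5])             # pi, mu1, mu2, sigma1, sigma2
omega = 1.5                                              # step size in (0, 2); omega = 1 is the plain iteration
for _ in range(200):
    update = em_step(x, *theta)
    theta = theta + omega * (update - theta)             # relaxed fixed-point step
    theta[0] = np.clip(theta[0], 1e-3, 1 - 1e-3)         # keep the mixing weight valid (safeguard, invented)
    theta[3:] = np.maximum(theta[3:], 1e-3)              # keep the standard deviations positive
print(theta)                                             # should approach roughly (0.3, -2.0, 2.5, 1.0, 1.0)
```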
Bredenstein, A.
2006-05-08
In this work we provide precision calculations for the processes γγ → 4 fermions and H → WW/ZZ → 4 fermions. At a γγ collider precise theoretical predictions are needed for the γγ → WW → 4f processes because of their large cross section. These processes allow a measurement of the gauge-boson couplings γWW and γγWW. Furthermore, the reaction γγ → H → WW/ZZ → 4f arises through loops of virtual charged, massive particles. Thus, the coupling γγH can be measured and Higgs bosons with a relatively large mass could be produced. For masses M_H ≳ 135 GeV the Higgs boson predominantly decays into W- or Z-boson pairs and subsequently into four leptons. The kinematical reconstruction of these decays is influenced by quantum corrections, especially real photon radiation. Since off-shell effects of the gauge bosons have to be taken into account below M_H ≈ 2M_{W/Z}, the inclusion of the decays of the gauge bosons is important. In addition, the spin and the CP properties of the Higgs boson can be determined by considering angular and energy distributions of the decay fermions. For a comparison of theoretical predictions with experimental data Monte Carlo generators are useful tools. We construct such programs for the processes γγ → WW → 4f and H → WW/ZZ → 4f. On the one hand, they provide the complete predictions at lowest order of perturbation theory. On the other hand, they contain quantum corrections, which can be classified into real corrections, connected with photon bremsstrahlung, and virtual corrections. Whereas the virtual quantum corrections to γγ → WW → 4f are calculated in the double-pole approximation, i.e. only doubly-resonant contributions are taken into account, we calculate the complete O(α) corrections for the H → WW
Estimating Corporate Yield Curves
Antionio Diaz; Frank Skinner
2001-01-01
This paper represents the first study of retail deposit spreads of UK financial institutions using stochastic interest rate modelling and the market comparable approach. By replicating quoted fixed deposit rates using the Black Derman and Toy (1990) stochastic interest rate model, we find that the spread between fixed and variable rates of interest can be modeled (and priced) using an interest rate swap analogy. We also find that we can estimate an individual bank deposit yield curve as a spr...
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
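The sketch below shows the Levinson-Durbin recursion that solves the Toeplitz normal equations for a prediction-error filter from an autocorrelation sequence, which is the core step the abstract describes; the input signal and filter order are invented, and no attempt is made to reproduce the full receiver-function workflow.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for a prediction-error filter from an
    autocorrelation sequence r[0..order] using the Levinson-Durbin recursion."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        # reflection coefficient; |k| < 1 keeps the recursion (and extrapolation) stable
        k = -(r[m] + np.dot(a[1:m], r[m - 1:0:-1])) / err
        a_new = a.copy()
        a_new[1:m + 1] += k * a[m - 1::-1]      # a_new[i] = a[i] + k * a[m - i], a_new[m] = k
        a = a_new
        err *= (1.0 - k * k)                    # updated prediction-error power
    return a, err

# Invented test signal, just to exercise the routine.
x = np.sin(np.linspace(0, 20, 400)) + 0.1 * np.random.default_rng(3).normal(size=400)
r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) - 1 + 6] / len(x)   # lags 0..5
a, err = levinson_durbin(r, order=5)
print(a, err)
```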
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
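A minimal numerical version of the procedure described above, assuming a simple single-diode panel model: the power P(V) = V·I(V) is differentiated and the voltage of maximum power is found where dP/dV crosses zero. The diode parameters are invented and not tied to any particular panel.

```python
import numpy as np
from scipy.optimize import brentq

# Invented single-diode panel model: I(V) = I_L - I_0 * (exp(V / V_t) - 1)
I_L, I_0, V_t = 5.0, 1e-9, 1.2      # photocurrent (A), saturation current (A), thermal-voltage scale (V)

def current(v):
    return I_L - I_0 * (np.exp(v / V_t) - 1.0)

def power(v):
    return v * current(v)

def dpower_dv(v, h=1e-6):
    # numerical derivative of P(V); the maximum-power point is where it crosses zero
    return (power(v + h) - power(v - h)) / (2 * h)

v_oc = V_t * np.log(I_L / I_0 + 1.0)            # open-circuit voltage, where I(V) = 0
v_mp = brentq(dpower_dv, 1e-3, v_oc - 1e-3)     # voltage of maximum power
print(v_mp, current(v_mp), power(v_mp))          # V_mp, I_mp, P_max
```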
A new formula for sputtering yield as function of ion energies at normal incidence
Grais, Kh. I.; Shaltout, A. A.; Ali, S. S.; Boutros, R. M.; El-behery, K. M.; El-Sayed, Z. A.
2010-04-01
The statistical ellipsoidal construction has been reconstructed into the statistical conicoidal construction, to describe the sputtering yield, at normal incidence, for various ion energies. The most important advantage of the new volume is the developing of a simple-single equation to describe the sputtering-energy relationship. Its parameters have been pictorially predicted from the conicoidal representation. A correction term [1 − (E_th/E_i)^(1/Ω)] was added to the present new equation to describe the threshold energy (E_th) of sputtering. The developed equation could be applied to all available ion/target combinations, over a broadened range of energy for low and heavy ion-masses. The new equation has been differentiated with respect to energy giving rise to a relation between the threshold energy and maximum energy, at which the maximum sputtering yield occurs. It was found that the obtained theoretical sputtering data for low and heavy ions satisfactorily approach the available experimental data and work well at the threshold regime. It should be mentioned that the conicoidal model is not only of interest for the analytical glow discharge method but also for the ion beam method for the sputtering process, where low and high sputtering values could occur.
Dale Bruce E
2009-11-01
Background Corn stover composition changes considerably throughout the growing season and also varies between the various fractions of the plant. These differences can impact optimal pretreatment conditions, enzymatic digestibility and maximum achievable sugar yields in the process of converting lignocellulosics to ethanol. The goal of this project was to determine which combination of corn stover fractions provides the most benefit to the biorefinery in terms of sugar yields and to determine the preferential order in which fractions should be harvested. Ammonia fiber expansion (AFEX) pretreatment, followed by enzymatic hydrolysis, was performed on early and late harvest corn stover fractions (stem, leaf, husk and cob). Sugar yields were used to optimize scenarios for the selective harvest of corn stover assuming 70% or 30% collection of the total available stover. Results The optimal AFEX conditions for all stover fractions, regardless of harvest period, were: 1.5 (g NH3 g^-1 biomass); 60% moisture content (dry-weight basis; dwb), 90°C and 5 min residence time. Enzymatic hydrolysis was conducted using cellulase, β-glucosidase, and xylanase at 31.3, 41.3, and 3.1 mg g^-1 glucan, respectively. The optimal harvest order for selectively harvested corn stover (SHCS) was husk > leaf > stem > cob. This harvest scenario, combined with optimal AFEX pretreatment conditions, gave a theoretical ethanol yield of 2051 L ha^-1 and 912 L ha^-1 for 70% and 30% corn stover collection, respectively. Conclusion Changing the proportion of stover fractions collected had a smaller impact on theoretical ethanol yields (29-141 L ha^-1) compared to the effect of altering pretreatment and enzymatic hydrolysis conditions (150-462 L ha^-1) or harvesting less stover (852-1139 L ha^-1). Resources may be more effectively spent on improving sustainable harvesting, thereby increasing potential ethanol yields per hectare harvested, and optimizing biomass processing rather than
Liu, Jianming; Chan, Siu Hung Joshua; Brock-Nannestad, Theis;
2016-01-01
Biocompatible chemistry is gaining increasing attention because of its potential within biotechnology for expanding the repertoire of biological transformations carried out by enzymes. Here we demonstrate how biocompatible chemistry can be used for synthesizing valuable compounds as well as for l...... of 82%. The diacetyl and S-BDO production rates and yields obtained are the highest ever reported, demonstrating the promising combination of metabolic engineering and biocompatible chemistry as well as the great potential of L. lactis as a new production platform.......M or 8.2g/L) and high yield (87% of the theoretical maximum). Subsequently, the pathway was extended to (S,S)-2,3-butanediol (S-BDO) through efficiently linking two metabolic pathways via chemical catalysis. This resulted in efficient homo-S-BDO production with a titer of 74mM (6.7g/L) S-BDO and a yield...
IDENTIFICATION OF IDEOTYPES BY CANONICAL ANALYSIS IN Panicum maximum
Janaina Azevedo Martuscello
2015-04-01
Grouping of genotypes by canonical variable analysis is an important tool in breeding. It allows the grouping of individuals with similar characteristics that are associated with superior agronomic performance and may indicate the ideal profile of a plant for the region. The objective of the present study was to define, by canonical analysis, the agronomic profile of Panicum maximum plants adapted to the Agreste region. The experiment was conducted in a completely randomized design with 28 treatments, 22 genotypes of Panicum maximum, and cultivars Mombasa, Tanzania, Massai, Milenio, BRS Zuri, and BRS Tamani in triplicate in 4-m² plots. Plots were harvested five times and the following traits were evaluated: plant height; total, leaf, stem, and dead dry matter yields; leaf:stem ratio; leaf percentage; and volumetric density of forage. The analysis of canonical variables was performed based on the phenotypic means of the evaluated traits and on the residual variance and covariance matrix. Genotype PM34 showed higher mean leaf dry matter yield under the conditions of the Agreste of Alagoas (on average 53% higher than cultivars Mombasa, Tanzania, Milenio and Massai). It was possible to summarize the variation observed in eight agronomic characteristics in only two canonical variables accounting for 81.44% of the data variation. The ideotype plant adapted to the conditions of the Agreste should be tall and present high leaf yield, leaf percentage, and leaf:stem ratio, and intermediate values of volumetric density of forage.
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of the entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with a high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) As the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential; (2) We discover metastable states characterized by high, near-maximum values of the DC-EPR. Under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage; (3) Without such a “training” period the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high; (4) We observe that the DC-EPR maximum is achieved within a time, Te, the evolution time, which scales as a power-law function of the applied voltage; (5) Finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved. Yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
Particle debonding using different yield criteria
Legarth, Brian Nyvang; Kuroda, Mitsutoshi
2004-01-01
is subjected to a fixed biaxial stress state. Four phenomenological anisotropic yield criteria are considered, namely Hill [Hill, R., 1948. Proc. Roy. Soc. London Ser. A 193, 281-297], Barlat and Lian [Barlat, F., Lian, J., 1989. Int. J. Plasticity 5, 51-66], Barlat et al. [Barlat, F., Lege, D.J., Brem, J.......C., 1991. Int. J. Plasticity 7, 693-712; Barlat, F., et al., 2003. Int. J. Plasticity 19, 1297-1319], or the von Mises isotropic yield surface. Also a non-normality flow rule is adopted in some of the studies. Significant effects of plastic anisotropy are seen on the plane stress cell, due to the initial...... extent and shape of the particular yield function considered. The required overall straining of the cell for debonding initiation is related to the extent of the yield surfaces, since a high yield stress promotes debonding. Additionally, the maximum overall stress level for the cell is lower for the Hill...
Robust Hammerstein Adaptive Filtering under Maximum Correntropy Criterion
Zongze Wu
2015-10-01
The maximum correntropy criterion (MCC) has recently been successfully applied to adaptive filtering. Adaptive algorithms under MCC show strong robustness against large outliers. In this work, we apply the MCC criterion to develop a robust Hammerstein adaptive filter. Compared with the traditional Hammerstein adaptive filters, which are usually derived based on the well-known mean square error (MSE) criterion, the proposed algorithm can achieve better convergence performance especially in the presence of impulsive non-Gaussian (e.g., α-stable) noises. Additionally, some theoretical results concerning the convergence behavior are also obtained. Simulation examples are presented to confirm the superior performance of the new algorithm.
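As a sketch of the criterion itself (for a plain linear filter rather than the paper's Hammerstein structure), the update below scales the usual LMS step by a Gaussian kernel of the error, so impulsive outliers barely move the weights. The system, step size, and kernel width are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def mcc_lms(x, d, n_taps=4, mu=0.05, sigma=1.0):
    """LMS-style adaptive filter trained under the maximum correntropy criterion:
    the usual update is weighted by exp(-e^2 / (2 sigma^2)), so large (impulsive)
    errors contribute almost nothing to the weight update."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]       # regressor, most recent sample first
        e = d[n] - w @ u
        w += mu * np.exp(-e ** 2 / (2 * sigma ** 2)) * e * u
    return w

# Invented demo: identify an unknown FIR system observed in impulsive noise.
true_w = np.array([0.6, -0.3, 0.2, 0.1])
x = rng.normal(size=5000)
d = np.convolve(x, true_w)[:len(x)]
impulses = rng.random(len(x)) < 0.02
d = d + rng.normal(0, 0.05, len(x)) + impulses * rng.normal(0, 10.0, len(x))

print(mcc_lms(x, d))   # estimated weights should land near true_w despite the outliers
```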
Maximum Likelihood Localization of Radiation Sources with unknown Source Intensity
Baidoo-Williams, Henry E
2016-01-01
In this paper, we consider a novel and robust maximum likelihood approach to localizing radiation sources with unknown statistics of the source signal strength. The result utilizes the smallest number of sensors required theoretically to localize the source. It is shown that, should the source lie in the open convex hull of the sensors, precisely $N+1$ sensors are required in $\\mathbb{R}^N, ~N \\in \\{1,\\cdots,3\\}$. It is further shown that the region of interest, the open convex hull of the sensors, is entirely devoid of false stationary points. An augmented gradient ascent algorithm, with random projections applied whenever an estimate escapes the convex hull, is presented.
Exploiting Maximum Parallelism in Loop Using Heterogeneous Computing
ZENG Guosun
2001-01-01
In this paper, we present the definition of maximum loop speedup, which is the metric of parallelism hidden in a loop body. We also study the classes of Do-loop and their dependence as well as the parallelism they contain. How to exploit such parallelism under a heterogeneous computing environment? The paper proposes several approaches, which are eliminating serial bottlenecks by means of heterogeneous computing, heterogeneous Do-all-loop scheduling, and heterogeneous Do-across scheduling. We find that, not only on theoretical analysis but also on experimental results, these schemes acquire better performance than in homogeneous computing.
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes’ maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradations under repetitive quenching where tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) could reduce short circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is changed and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results of the samples, the whole length of CCs used in the design of an SFCL can be determined.
Blatt, John M
2010-01-01
A classic work by two leading physicists and scientific educators endures as an uncommonly clear and cogent investigation and correlation of key aspects of theoretical nuclear physics. It is probably the most widely adopted book on the subject. The authors approach the subject as "the theoretical concepts, methods, and considerations which have been devised in order to interpret the experimental material and to advance our ability to predict and control nuclear phenomena." The present volume does not pretend to cover all aspects of theoretical nuclear physics. Its coverage is restricted to
Theoretical information reuse and integration
Rubin, Stuart
2016-01-01
Information Reuse and Integration addresses the efficient extension and creation of knowledge through the exploitation of Kolmogorov complexity in the extraction and application of domain symmetry. Knowledge, which seems to be novel, can more often than not be recast as the image of a sequence of transformations, which yield symmetric knowledge. When the size of those transformations and/or the length of that sequence of transforms exceeds the size of the image, then that image is said to be novel or random. It may also be that the new knowledge is random in that no such sequence of transforms, which produces it exists, or is at least known. The nine chapters comprising this volume incorporate symmetry, reuse, and integration as overt operational procedures or as operations built into the formal representations of data and operators employed. Either way, the aforementioned theoretical underpinnings of information reuse and integration are supported.
Semi-empirical Calculation for Yield of 240Pu Spontaneous Fission
SHU; Neng-chuan; LIU; Li-le; CHEN; Xiao-song; LIU; Ting-jin; SUN; Zheng-jun; CHEN; Yong-jing; QIAN; Jing
2012-01-01
The spontaneous fission yield has important implications in nuclear engineering. This work used a semi-empirical model to calculate its chain yield, and the result shows good agreement with the measured data. There are only 3 sets of measured data, and only two gave the chain yields and cumulative yields, covering 17 chains. This is not enough to satisfy the requirements of users, so a theoretical model is needed to calculate the chain yields for which no measured data exist.
Maximum-likelihood fits to histograms for improved parameter estimation
Fowler, Joseph W
2013-01-01
Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
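A small illustration of the point, assuming a Gaussian peak fitted to a Poisson histogram: one fit minimizes a Neyman-weighted chi^2 and the other minimizes the negative Poisson log-likelihood, and the two can be compared directly. The bin grid, counts, and starting values are invented; this is not the paper's microcalorimeter analysis or its modified Levenberg-Marquardt implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

edges = np.linspace(-5, 5, 61)
centers = 0.5 * (edges[:-1] + edges[1:])
counts, _ = np.histogram(rng.normal(0.0, 1.0, 2000), bins=edges)

def expected(params):
    amp, mu, sig = params
    return amp * np.exp(-0.5 * ((centers - mu) / sig) ** 2)

def neyman_chi2(params):
    m = expected(params)
    w = np.maximum(counts, 1.0)              # classic per-bin 1/n weighting (known to bias)
    return np.sum((counts - m) ** 2 / w)

def poisson_nll(params):
    m = np.maximum(expected(params), 1e-12)
    return np.sum(m - counts * np.log(m))    # negative Poisson log-likelihood, up to a constant

x0 = [100.0, 0.1, 1.2]
fit_chi2 = minimize(neyman_chi2, x0, method="Nelder-Mead").x
fit_ml = minimize(poisson_nll, x0, method="Nelder-Mead").x
print("chi2 fit (amp, mu, sigma):", fit_chi2)
print("ML fit   (amp, mu, sigma):", fit_ml)
```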
2002-01-01
The proceedings contain 8 papers from the Conference on Theoretical Computer Science. Topics discussed include: query by committee, linear separation and random walks; hardness results for neural network approximation problems; a geometric approach to leveraging weak learners; mind change...
Order-theoretical connectivity
T. A. Richmond
1990-01-01
Order-theoretically connected posets are introduced and applied to create the notion of T-connectivity in ordered topological spaces. As special cases T-connectivity contains classical connectivity, order-connectivity, and link-connectivity.
Correlation Analysis of some Growth, Yield, Yield Components and ...
Keywords: Correlation, Wheat; growth, yield, yield components, grain quality. INTRODUCTION. Wheat ... macaroni, biscuits, cookies, cakes, pasta, noodles and couscous; beer, many .... and 6 WAS which ensured weed free plots. Fertilizer was ...
Theoretical and computational chemistry.
Meuwly, Markus
2010-01-01
Computer-based and theoretical approaches to chemical problems can provide atomistic understanding of complex processes at the molecular level. Examples ranging from rates of ligand-binding reactions in proteins to structural and energetic investigations of diastereomers relevant to organo-catalysis are discussed in the following. They highlight the range of application of theoretical and computational methods to current questions in chemical research.
Theoretical physics and astrophysics
Ginzburg, VL
1979-01-01
The aim of this book is to present, on the one hand various topics in theoretical physics in depth - especially topics related to electrodynamics - and on the other hand to show how these topics find applications in various aspects of astrophysics. The first text on theoretical physics and astrophysical applications, it covers many recent advances including those in X-ray, γ-ray and radio-astronomy, with comprehensive coverage of the literature
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position.
Maximum likelihood estimation for cytogenetic dose-response curves
Frome, E.L; DuFrain, R.J.
1983-10-01
In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low-LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ(γd + g(t, τ)d²), where t is the time and d is dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
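A hedged sketch of the kind of Poisson maximum-likelihood fit the abstract describes, specialized to a linear-quadratic dicentric yield Y(d) = αd + βd² for acute exposure (dose-protraction factor set to 1). The dose points, cell numbers and aberration counts below are invented for illustration.

```python
# Hedged sketch: Poisson MLE for a linear-quadratic dicentric dose response.
# Doses, cell counts and aberration counts are synthetic.
import numpy as np
from scipy.optimize import minimize

dose = np.array([0.5, 1.0, 2.0, 3.0, 4.0])        # Gy
cells = np.array([5000, 3000, 1500, 800, 500])     # cells scored per dose point
dicentrics = np.array([18, 35, 68, 83, 92])        # observed dicentrics (synthetic)

def neg_loglike(params):
    a, b = params
    mean = cells * (a * dose + b * dose**2)         # expected dicentrics per dose point
    mean = np.clip(mean, 1e-12, None)
    return np.sum(mean - dicentrics * np.log(mean))  # Poisson NLL (constant dropped)

fit = minimize(neg_loglike, x0=[0.01, 0.01], method="Nelder-Mead")
a_hat, b_hat = fit.x
print(f"alpha = {a_hat:.4f} /Gy, beta = {b_hat:.4f} /Gy^2")
```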
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class associates with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension for the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Louis de Grange
2010-09-01
Maximum entropy models are often used to describe supply and demand behavior in urban transportation and land use systems. However, they have been criticized for not representing behavioral rules of system agents and because their parameters seem to adjust only to modeler-imposed constraints. In response, it is demonstrated that the solution to the entropy maximization problem with linear constraints is a multinomial logit model whose parameters solve the likelihood maximization problem of this probabilistic model. But this result neither provides a microeconomic interpretation of the entropy maximization problem nor explains the equivalence of these two optimization problems. This work demonstrates that an analysis of the dual of the entropy maximization problem yields two useful alternative explanations of its solution. The first shows that the maximum entropy estimators of the multinomial logit model parameters reproduce rational user behavior, while the second shows that the likelihood maximization problem for multinomial logit models is the dual of the entropy maximization problem.
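A hedged numerical illustration of the equivalence the abstract discusses: maximizing entropy subject to a linear (mean-cost) constraint produces choice probabilities of multinomial-logit form. The alternative costs and the target mean cost are assumptions for the example, not data from the paper.

```python
# Hedged sketch: entropy maximization under a linear constraint vs. the
# equivalent logit form p_i ∝ exp(-lam * c_i). Costs and target are invented.
import numpy as np
from scipy.optimize import minimize, brentq

cost = np.array([1.0, 2.0, 3.0, 5.0])    # cost of each alternative
target_mean_cost = 2.2                   # imposed linear constraint

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))

cons = (
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
    {"type": "eq", "fun": lambda p: np.dot(p, cost) - target_mean_cost},
)
p0 = np.full(cost.size, 1.0 / cost.size)
res = minimize(neg_entropy, p0, bounds=[(0, 1)] * cost.size, constraints=cons)
p_maxent = res.x

# Logit form: fit the dual variable lam so the same constraint is matched
def mean_cost_gap(lam):
    w = np.exp(-lam * cost)
    return np.dot(w / w.sum(), cost) - target_mean_cost

lam = brentq(mean_cost_gap, -10.0, 10.0)
p_logit = np.exp(-lam * cost); p_logit /= p_logit.sum()
print(np.round(p_maxent, 4), np.round(p_logit, 4))   # the two should agree
```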
Paek, Seung Weon; Kang, Jae Hyun; Ha, Naya; Kim, Byung-Moo; Jang, Dae-Hyun; Jeon, Junsu; Kim, DaeWook; Chung, Kun Young; Yu, Sung-eun; Park, Joo Hyun; Bae, SangMin; Song, DongSup; Noh, WooYoung; Kim, YoungDuck; Song, HyunSeok; Choi, HungBok; Kim, Kee Sup; Choi, Kyu-Myung; Choi, Woonhyuk; Jeon, JoongWon; Lee, JinWoo; Kim, Ki-Su; Park, SeongHo; Chung, No-Young; Lee, KangDuck; Hong, YoungKi; Kim, BongSeok
2012-03-01
A set of design for manufacturing (DFM) techniques has been developed and applied to 45nm, 32nm and 28nm logic process technologies. A novel methodology combined a number of potentially conflicting DFM techniques into a comprehensive solution. These techniques work in three phases for design optimization and one phase for silicon diagnostics. In the DFM prevention phase, foundation IP such as standard cells, IO, and memory and the P&R tech file are optimized. In the DFM solution phase, which happens during the ECO step, auto fixing of process weak patterns and advanced RC extraction are performed. In the DFM polishing phase, post-layout tuning is done to improve manufacturability. DFM analysis enables prioritization of random and systematic failures. The DFM technique presented in this paper has been silicon-proven with three successful tape-outs in Samsung 32nm processes; about 5% improvement in yield was achieved without any notable side effects. Visual inspection of silicon also confirmed the positive effect of the DFM techniques.
Maximizing ROI with yield management
Neil Snyder
2001-01-01
.... the technology is based on the concept of yield management, which aims to sell the right product to the right customer at the right price and the right time, thereby maximizing revenue, or yield...
Shortcomings in wheat yield predictions
Semenov, Mikhail A.; Mitchell, Rowan A. C.; Whitmore, Andrew P.; Hawkesford, Malcolm J.; Parry, Martin A. J.; Shewry, Peter R.
2012-06-01
Predictions of a 40-140% increase in wheat yield by 2050, reported in the UK Climate Change Risk Assessment, are based on a simplistic approach that ignores key factors affecting yields and hence are seriously misleading.
Payoff-monotonic game dynamics and the maximum clique problem.
Pelillo, Marcello; Torsello, Andrea
2006-05-01
Evolutionary game-theoretic models and, in particular, the so-called replicator equations have recently proven to be remarkably effective at approximately solving the maximum clique and related problems. The approach is centered around a classic result from graph theory that formulates the maximum clique problem as a standard (continuous) quadratic program and exploits the dynamical properties of these models, which, under a certain symmetry assumption, possess a Lyapunov function. In this letter, we generalize previous work along these lines in several respects. We introduce a wide family of game-dynamic equations known as payoff-monotonic dynamics, of which replicator dynamics are a special instance, and show that they enjoy precisely the same dynamical properties as standard replicator equations. These properties make any member of this family a potential heuristic for solving standard quadratic programs and, in particular, the maximum clique problem. Extensive simulations, performed on random as well as DIMACS benchmark graphs, show that this class contains dynamics that are considerably faster than and at least as accurate as replicator equations. One problem associated with these models, however, relates to their inability to escape from poor local solutions. To overcome this drawback, we focus on a particular subclass of payoff-monotonic dynamics used to model the evolution of behavior via imitation processes and study the stability of their equilibria when a regularization parameter is allowed to take on negative values. A detailed analysis of these properties suggests a whole class of annealed imitation heuristics for the maximum clique problem, which are based on the idea of varying the parameter during the imitation optimization process in a principled way, so as to avoid unwanted inefficient solutions. Experiments show that the proposed annealing procedure does help to avoid poor local optima by initially driving the dynamics toward promising regions in
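A hedged sketch of the replicator-dynamics heuristic referred to above, using the Motzkin-Straus quadratic program over the simplex. The 1/2 regularization on the diagonal is a common choice for avoiding spurious solutions and is an assumption here; the test graph is invented.

```python
# Hedged sketch: discrete replicator dynamics for the maximum clique problem
# via the (regularized) Motzkin-Straus program. Illustrative, not the paper's code.
import numpy as np

def replicator_clique(A, iters=2000, tol=1e-12):
    n = A.shape[0]
    W = A + 0.5 * np.eye(n)                  # regularized payoff matrix (assumption)
    x = np.full(n, 1.0 / n)                  # start at the barycenter of the simplex
    for _ in range(iters):
        wx = W @ x
        x_new = x * wx / (x @ wx)            # discrete replicator update
        if np.linalg.norm(x_new - x, 1) < tol:
            x = x_new
            break
        x = x_new
    return np.where(x > 1.0 / (2 * n))[0]    # support ~ vertices of the found clique

# Small test graph: vertices {0,1,2,3} form a 4-clique, vertex 4 hangs off it.
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
print(replicator_clique(A))                   # expected: [0 1 2 3]
```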
Effects of Planting Dates on Yield and Yield Components of Four Cumin (Cuminum cyminum L.) Landraces
R Soheyli
2011-02-01
In order to investigate the effect of fall and winter planting dates on phenological and morphological traits, yield and yield components of four cumin (Cuminum cyminum L.) landraces, an experiment was conducted at the Research Farm of the Agricultural College of Ferdowsi University of Mashhad as a split plot based on a randomized complete block design with three replications in the 2005-06 growing season. Four planting dates (11th Nov., 11th Dec., 20th Feb. and 17th Mar.) were allocated to main plots and four landraces (Ghayen, Torbat-e-heidariyeh, Sabzevar and Khaf) were assigned to sub plots. The results indicated that the effects of planting date, landrace and the interaction of these two factors on plant height, percent of plant survival after winter, yield components, seed yield, biological yield and harvest index were significant. With respect to plant height, there was no difference between fall (11th Nov. and 11th Dec.) and winter (20th Feb.) planting dates, while plant height in the fourth planting date (17th Mar.) decreased severely. The lowest percent of plant survival was observed in the fall sowing dates, while the third and fourth plantings had no plant mortality, as they were not exposed to cold conditions. The maximum percent of plant survival belonged to the Ghayen and Khaf landraces with 85% and 84% respectively, and Torbat-e-heidariyeh had the lowest percent of plant survival with 59%. The greatest number of umbels per plant, number of seeds per umbel, 1000 seed weight and seed weight per plant were achieved in the first planting date. Despite the priority of the first planting date in yield components over other planting dates, the greatest seed yield and biological yield were observed in the third planting date (20th Feb.). With regard to seed yield and biological yield, Ghayen in the third planting and Torbat-e-heidariyeh in the first planting had the greatest and the lowest yields, respectively. Since the fall and winter planting dates led to
Theoretical investigations on model ternary polypeptides using genetic algorithm-Some new results
Arora, Vinita [Department of Chemistry, University of Delhi, Delhi 110 007 (India); Bakhshi, A.K., E-mail: akbakhshi2000@yahoo.com [Department of Chemistry, University of Delhi, Delhi 110 007 (India)
2011-04-28
Graphical abstract: Model ternary polypeptide chains consisting of glycine, alanine and serine amino acids as repeat units in anti-parallel β-pleated sheet conformation have been theoretically investigated and designed using the genetic algorithm. The optimum solution, or the polypeptide chain being searched for using the algorithm, is the one having minimum band gap and maximum electronic delocalization in the polypeptide chain. The effects of (i) change of basis set from minimal to double zeta, (ii) change in secondary structure from β-pleated to α-helical, (iii) presence of a solvation shell, and (iv) binding of ions such as H⁺ and Li⁺ to the peptide group on the resulting optimum solution as well as on the electronic structure and conduction properties of polypeptides have been investigated taking the ab initio Hartree-Fock crystal orbital results as input. The band gap value was also found to decrease in the presence of a solvation shell, in the presence of cations in the vicinity of the polypeptide chain, as well as with the use of an improved basis set. Highlights: GA has been used for theoretical tailoring of aperiodic ternary polypeptides. The band gap of the polypeptide chain decreases in the presence of a solvation shell. The band gap decreases in the presence of cations in the vicinity of the chain. The H⁺ ion acts as a stronger electron acceptor than the Li⁺ ion due to its smaller size. - Abstract: Using a genetic algorithm (GA), model ternary polypeptides containing glycine, alanine and serine in β-pleated conformation have been theoretically investigated. In the design, the criterion to attain the optimum solution at the end of the GA run is minimum band gap and maximum delocalization in the polypeptide chain. Ab initio results obtained using Clementi's minimal basis set are used as input. Effects of (i) change of basis set from minimal to double zeta, and (ii) change in secondary structure from β-pleated to α-helical
Ortega, E.; Montecinos, R.; Cattin, L.; Díaz, F. R.; del Valle, M. A.; Bernède, J. C.
2017-08-01
The study of new dipolar A-π-D molecules, which have an acceptor (A) and a donor (D) group joined by a conjugated bridge, has been a focus of attention in recent years due to their distinctive properties. In the current work, a molecular system has been modified in order to compare the effect on properties such as quantum yield. Thus, two series were generated (alkyl- and alkoxy-substituted) to determine whether molecules with tertiary asymmetric amines change their optical properties and whether quantum yield is affected. The different products have been characterized by several techniques such as UV-Vis spectrophotometry, elemental analysis, NMR, FT-IR, mass spectroscopy and fluorescence spectroscopy. Furthermore, their behavior in eight organic solvents (dichloromethane, tetrahydrofuran, ethyl acetate, 1,4-dioxane, acetone, acetonitrile, dimethylformamide and dimethylsulfoxide) was studied experimentally and theoretically. The quantum yields were higher for the alkyl-substituted series. Theoretically, the dihedral angles formed between the tertiary amine and carbonyl group moieties correlate with the quantum yield values, helping to explain why they are higher in non-polar solvents. Consequently, the maximum quantum yield was obtained with (E)-2-cyano-3-(5-((E)-2-(9,9-diethyl-7-(methyl(phenyl)amino)-9H-fluoren-2-yl) vinyl)thiophen-2-yl)acrylic acid (M8-1) in 1,4-dioxane, reaching 98.8%.
Interactions of climatic factors affecting milk yield and composition
Sharma, A.K.; Rodriguez, L.A.; Wilcox, C.J.; Collider, R.J.; Bachman, K.C.; Martin, F.G.
1988-01-01
Objectives were to evaluate effects of interactions of maximum temperature, minimum relative humidity, and solar radiation on milk yield and constituent traits. Effects of climate variables and their interactions were significant but small in most cases. Second order regression models were developed for several variables. Six were examined in detail: Holstein and Jersey milk yields, Holstein fat and Feulgen-DNA reflectance percent, and Jersey protein percent and yield. Maximum temperature had greatest influence on each response, followed by minimum relative humidity and solar radiation. Optimum conditions for milk production were at maximum temperatures below 19.4°C, increasing solar radiation, and minimum relative humidity between 33.4 and 78.2% (cool sunny days, moderate humidity). Maximum Holstein fat percent of 3.5% was predicted for maximum temperatures below 30.8°C, minimum relative humidity below 89%, and solar radiation below 109 Langleys; actual mean Holstein fat percent was 3.35%. Optimum climatic conditions for Jersey protein percent were at maximum temperature of 10.6°C with solar radiation at 300 Langleys and relative humidity at 16% (cool sunny days, low humidity). Because noteworthy interactions existed between climate effects, response surface methodology was suitable for determining optimum climatic conditions for milk production.
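A hedged sketch of a second-order (response-surface) regression of the kind described above, with milk yield regressed on maximum temperature, minimum relative humidity and solar radiation including quadratic and interaction terms. The data are synthetic; none of the fitted coefficients correspond to the study.

```python
# Hedged sketch: second-order regression (response surface) on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
T = rng.uniform(10, 38, n)     # maximum temperature, deg C
H = rng.uniform(15, 95, n)     # minimum relative humidity, percent
R = rng.uniform(50, 350, n)    # solar radiation, Langleys

# Synthetic "true" surface with an optimum at moderate T and H
milk = 25 - 0.02*(T - 19)**2 - 0.001*(H - 55)**2 + 0.005*R + rng.normal(0, 0.5, n)

def design(T, H, R):
    # intercept, linear, quadratic and interaction terms
    return np.column_stack([np.ones_like(T), T, H, R,
                            T**2, H**2, R**2, T*H, T*R, H*R])

beta, *_ = np.linalg.lstsq(design(T, H, R), milk, rcond=None)
print(np.round(beta, 4))   # fitted second-order coefficients
```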
Analysis of the trade-off between high crop yield and low yield instability at the global scale
Ben-Ari, Tamara; Makowski, David
2016-10-01
Yield dynamics of major crop species vary remarkably among continents. The worldwide distribution of cropland influences both the expected levels and the interannual variability of global yields. An expansion of cultivated land in the most productive areas could theoretically increase global production, but also increase global yield instability if the most productive regions are characterized by high interannual yield variability. In this letter, we use portfolio analysis to quantify the trade-off between the expected values and the interannual variance of global yield. We compute optimal frontiers for four crop species, i.e., maize, rice, soybean and wheat, and show how the distribution of cropland among large world regions can be optimized to either increase expected global crop production or decrease its interannual variability. We also show that a preferential allocation of cropland in the most productive regions can increase global expected yield at the expense of yield stability. Theoretically, optimizing the distribution of a small fraction of total cultivated areas can help find a good compromise between low instability and high crop yields at the global scale.
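A hedged sketch of the portfolio-style calculation behind the letter's trade-off: choose regional cropland shares that minimize the interannual variance of global yield for a given expected yield, tracing out an optimal frontier. The regional means and covariance matrix are illustrative assumptions, not the letter's data.

```python
# Hedged sketch: mean-variance ("portfolio") allocation of cropland shares.
import numpy as np
from scipy.optimize import minimize

mean_yield = np.array([6.0, 4.5, 3.0])          # t/ha, expected yield per region (assumed)
cov = np.array([[1.00, 0.20, 0.05],
                [0.20, 0.40, 0.02],
                [0.05, 0.02, 0.10]])            # interannual covariance, (t/ha)^2 (assumed)

def min_variance_weights(target):
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "eq", "fun": lambda w: w @ mean_yield - target})
    res = minimize(lambda w: w @ cov @ w, x0=np.full(3, 1/3),
                   bounds=[(0, 1)] * 3, constraints=cons)
    return res.x

for target in (4.0, 4.5, 5.0):                  # points on the optimal frontier
    w = min_variance_weights(target)
    print(target, np.round(w, 3), round(float(w @ cov @ w), 4))
```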
Maximum energy output of a DFIG wind turbine using an improved MPPT-curve method
Dinh-Chung Phan; Shigeru Yamamoto
2015-01-01
A new method is proposed for obtaining the maximum power output of a doubly-fed induction generator (DFIG) wind turbine to control the rotor- and grid-side converters. The efficiency of maximum power point tracking that is obtained by the proposed method is theoretically guaranteed under assumptions that represent physical conditions. Several control parameters may be adjusted to ensure the quality of control performance. In particular, a DFIG state-space model and a control technique based o...
Efficiency at maximum power for an Otto engine with ideal feedback
Wang, Honghui; He, Jizhou; Wang, Jianhui; Wu, Zhaoqi
2016-10-01
We propose an Otto heat engine that undergoes processes involving a special class of feedback and analyze theoretically its response. We use stochastic thermodynamics to determine the performance characteristics of the heat engine and indicate the possibility that its maximum efficiency can surpass the Carnot value. The analytical expression for efficiency at maximum power, including the effects resulting from feedback, reduces to that previously derived based on an engine without feedback.
Maximum Work of Free-Piston Stirling Engine Generators
Kojima, Shinji
2017-04-01
Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.
Influence of Pareto optimality on the maximum entropy methods
Peddavarapu, Sreehari; Sunil, Gujjalapudi Venkata Sai; Raghuraman, S.
2017-07-01
Galerkin meshfree schemes are emerging as a viable substitute for the finite element method for solving partial differential equations in large-deformation as well as crack-propagation problems. The introduction of the Shannon-Jaynes entropy principle into scattered data approximation has changed the way the approximation functions are defined, resulting in maximum entropy approximants. In addition, an objective functional that controls the degree of locality leads to local maximum entropy approximants. These are based on an information-theoretical Pareto optimality between entropy and degree of locality that defines the basis functions on the scattered nodes. The degree of locality in turn relies on the choice of the locality parameter and the prior (weight) function. The proper choice of both plays a vital role in attaining the desired accuracy. The present work is focused on the effect of the locality parameter, which defines the degree of locality, and of the priors - Gaussian, cubic spline and quartic spline functions - on the behavior of local maximum entropy approximants.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
Sherwan E. Tofiq
2016-06-01
The present study was conducted at the Agricultural Research Center of Bakrajo, Sulaimani, Iraq during three successive seasons, 2011-2014. This research was conducted using seven faba bean cultivars, namely Zaina, Seher, Yieldiz, Civilla, Luz di Otono, Tanyari and a local cultivar. The following measurements and observations were made: 100 seed weight, first node height, number of seeds/plant, number of seeds/pod, pod length, number of pods/plant and seed yield. The results indicated that highly significant and negative correlations were present between 100 seed weight and seed yield, whereas significant and positive correlations were present between the number of seeds/plant and seed yield in the second season. In addition, the results of the third season indicate that the number of seeds/plant correlated significantly and positively with seed yield, and the number of seeds/pod correlated significantly and negatively with seed yield, whereas the number of pods/plant correlated highly significantly and positively with seed yield. The character first node height showed the maximum direct effect on seed yield in the first and third seasons, while the number of pods/plant showed the maximum direct effect on seed yield in the second season.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied for searching the distribution functions of physical values. MENT naturally takes into consideration the requirement of maximum entropy, the characteristics of the system and the connection conditions. It can be applied to the statistical description of closed and open systems. Examples are considered in which MENT has been used for the description of equilibrium and nonequilibrium states, and of states far from thermodynamic equilibrium.
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was considered as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with the osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; while for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
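A hedged sketch of the demarking-point arithmetic used in studies of this kind: a bone longer than the female mean plus three standard deviations is classed as definitely male, and one shorter than the male mean minus three standard deviations as definitely female. The samples below are synthetic stand-ins, not the Jamnagar measurements.

```python
# Hedged sketch: demarking point (DP) analysis on synthetic femoral lengths (mm).
import numpy as np

rng = np.random.default_rng(2)
male = rng.normal(452, 24, 136)     # maximum femoral length, synthetic sample
female = rng.normal(417, 19, 48)

dp_definitely_male = female.mean() + 3 * female.std(ddof=1)
dp_definitely_female = male.mean() - 3 * male.std(ddof=1)

pct_male_identified = 100 * np.mean(male > dp_definitely_male)
pct_female_identified = 100 * np.mean(female < dp_definitely_female)
print(round(dp_definitely_male, 1), round(dp_definitely_female, 1),
      round(pct_male_identified, 1), round(pct_female_identified, 1))
```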
Reflections on theoretical pragmatics
黄衍
2001-01-01
This paper provides a critical survey of theoretical pragmatics in contemporary linguistics. Topics addressed in the essay include the Anglo-American and European Continental schools of thought; neo-Gricean pragmatics and Relevance theory; the pragmatics-semantics interface; and the pragmatics-syntax interface.
Fission yield covariances for JEFF: A Bayesian Monte Carlo method
Leray Olivier
2017-01-01
The JEFF library does not contain fission yield covariances, but simply best estimates and uncertainties. This situation is not unique, as all libraries are facing this deficiency, firstly due to the lack of a defined format. An alternative approach is to provide a set of random fission yields, themselves reflecting covariance information. In this work, these random files are obtained by combining the information from the JEFF library (fission yields and uncertainties) and the theoretical knowledge from the GEF code. Examples of this method are presented for the main actinides together with their impacts on simple burn-up and decay heat calculations.
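A hedged sketch of the random-file idea described above: draw many random realizations of a few fission yields from best estimates and uncertainties and recover an empirical covariance matrix from the sample. Independent truncated normals are assumed here; the actual method combines JEFF uncertainties with GEF-informed correlations and renormalization.

```python
# Hedged sketch: random fission-yield files -> empirical covariance. Illustrative values.
import numpy as np

rng = np.random.default_rng(3)
best_estimate = np.array([0.058, 0.061, 0.049, 0.032])    # illustrative yields
rel_uncertainty = np.array([0.02, 0.03, 0.05, 0.08])       # relative 1-sigma (assumed)

n_files = 5000
samples = rng.normal(best_estimate,
                     best_estimate * rel_uncertainty,
                     size=(n_files, best_estimate.size))
samples = np.clip(samples, 0.0, None)            # yields cannot be negative

cov = np.cov(samples, rowvar=False)              # empirical covariance matrix
corr = np.corrcoef(samples, rowvar=False)
print(np.round(cov, 8))
print(np.round(corr, 3))                          # ~identity without added correlations
```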
PTree: pattern-based, stochastic search for maximum parsimony phylogenies
Ivan Gregor
2013-06-01
Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
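A hedged sketch of the parsimony criterion the abstract optimizes: Fitch's algorithm scores the minimum number of character changes on a fixed rooted binary tree. The tiny tree and single alignment column are invented; PTree itself searches over topologies and infers intermediate sequences.

```python
# Hedged sketch: Fitch parsimony score for one character column on a fixed tree.
def fitch_score(tree, states):
    """tree: nested tuples of leaf names; states: dict leaf -> character."""
    changes = 0

    def post_order(node):
        nonlocal changes
        if isinstance(node, str):                 # leaf
            return {states[node]}
        left, right = node
        s_left, s_right = post_order(left), post_order(right)
        inter = s_left & s_right
        if inter:
            return inter                          # no change needed at this node
        changes += 1                              # empty intersection forces one substitution
        return s_left | s_right

    post_order(tree)
    return changes

tree = (("A", "B"), ("C", "D"))
column = {"A": "G", "B": "G", "C": "T", "D": "G"}
print(fitch_score(tree, column))                  # expected: 1
```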
Boundary condition effects on maximum groundwater withdrawal in coastal aquifers.
Lu, Chunhui; Chen, Yiming; Luo, Jian
2012-01-01
Prevention of sea water intrusion in coastal aquifers subject to groundwater withdrawal requires optimization of well pumping rates to maximize the water supply while avoiding sea water intrusion. Boundary conditions and the aquifer domain size have significant influences on simulating flow and concentration fields and estimating maximum pumping rates. In this study, an analytical solution is derived based on potential-flow theory for evaluating maximum groundwater pumping rates in a domain with a constant hydraulic head landward boundary. An empirical correction factor, which was introduced by Pool and Carrera (2011) to account for mixing in the case with a constant recharge rate boundary condition, is found to be also applicable for the case with a constant hydraulic head boundary condition, and therefore greatly improves the usefulness of the sharp-interface analytical solution. Compared with the solution for a constant recharge rate boundary, we find that a constant hydraulic head boundary often yields larger estimates of the maximum pumping rate, and when the domain size is five times greater than the distance between the well and the coastline, the effect of setting different landward boundary conditions becomes insignificant, with a relative difference between the two solutions of less than 2.5%. These findings can serve as preliminary guidance for conducting numerical simulations and designing tank-scale laboratory experiments for studying groundwater withdrawal problems in coastal aquifers with minimized boundary condition effects.
Genetic relationship between yield and yield components of maize
Nastasić Aleksandra
2010-01-01
One of the objectives of this paper was to determine the relationship between grain yield and yield components in S1 and HS progenies of one early synthetic maize population. Grain yield was in highly significant, medium-strong to strong association with all studied yield components, in both populations. The strongest correlation was recorded between grain yield and 1000-kernel weight (S1 progenies rg = 0.684; HS progenies rg = 0.633). Among the other studied traits, the highest values of genotypic coefficients of correlation were found between 1000-kernel weight and kernel depth in the S1 population, and 1000-kernel weight and ear length in the HS population. A further objective of this research was finding the direct and indirect effects of yield components on grain yield. A desirable, highly significant influence on grain yield in the path coefficient analysis was found for 1000-kernel weight and kernel row number in both S1 and HS progenies, and for ear length in the population of S1 progenies. Kernel depth had an undesirable direct effect on grain yield in both populations.
L. M. Miller
2010-09-01
The availability of wind power for renewable energy extraction is ultimately limited by how much kinetic energy is generated by natural processes within the Earth system and by fundamental limits of how much of the wind power can be extracted. Here we use these considerations to provide a maximum estimate of wind power availability over land. We use three different methods. First, we use simple, established estimates of the energetics of the atmospheric circulation, which yield about 38 TW of wind power available for extraction. Second, we set up a simple momentum balance model to estimate maximum extractability which we then apply to reanalysis climate data, yielding an estimate of 17 TW. Finally, we perform climate model simulations in which we extract different amounts of momentum from the atmospheric boundary layer to obtain a maximum estimate of how much power can be extracted, yielding 36 TW. These three methods consistently yield maximum estimates in the range of 17–38 TW and are notably less than recent estimates that claim abundant wind power availability. Furthermore, we show with the climate model simulations that the climatic effects at maximum wind power extraction are similar in magnitude to those associated with a doubling of atmospheric CO2. We conclude that in order to understand fundamental limits to renewable energy resources, as well as the impacts of their utilization, it is imperative to use a thermodynamic, Earth system perspective, rather than engineering specifications of the latest technology.
Combining experiments and simulations using the maximum entropy principle.
Wouter Boomsma
2014-02-01
A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
Combining experiments and simulations using the maximum entropy principle.
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-02-01
A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
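A hedged sketch of the maximum-entropy reweighting idea discussed in the two entries above: given per-frame values of an observable back-calculated from a simulated ensemble, find the minimal (maximum-entropy) change of the frame weights that reproduces an experimental average. The observable values and the target are invented for the example, and the single-observable exponential reweighting used here is a simplification of the general approach.

```python
# Hedged sketch: max-ent reweighting of simulation frames toward an experimental average.
import numpy as np
from scipy.optimize import brentq

obs = np.array([2.8, 3.1, 3.6, 4.0, 4.4])   # back-calculated observable per frame (synthetic)
w0 = np.full(obs.size, 1.0 / obs.size)       # original (uniform) frame weights
target = 3.2                                 # experimental average to reproduce

# Max-ent solution: w_i ∝ w0_i * exp(-lam * obs_i); solve for lam so that the
# reweighted average matches the experiment.
def gap(lam):
    w = w0 * np.exp(-lam * (obs - obs.mean()))   # shift for numerical stability
    w /= w.sum()
    return w @ obs - target

lam = brentq(gap, -10.0, 10.0)
w = w0 * np.exp(-lam * (obs - obs.mean())); w /= w.sum()
print(round(lam, 3), np.round(w, 3), round(float(w @ obs), 3))
```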
Determination of the yield locus by means of temperature measurement
Banabic, D.; Huetink, J.
2006-01-01
The paper presents a theoretical background of the thermo-graphical method of determining the yield locus. The analytical expression of the temperature variation of the specimen deformed in the elastic state is determined starting from the first law of thermodynamics. The experimental method for det
Measuring of the maximum measurable velocity for dual-frequency laser interferometer
Zhiping Zhang; Zhaogu Cheng; Zhaoyu Qin; Jianqiang Zhu
2007-01-01
There is an increasing demand on the measurable velocity of laser interferometers in manufacturing technologies. The maximum measurable velocity is limited by the frequency difference of the laser source, the optical configuration, and the electronics bandwidth. An experimental setup based on free falling movement has been demonstrated to measure the maximum measurable velocity for interferometers. Measurement results show that the maximum measurable velocity is less than its theoretical value. Moreover, the effect of various factors upon the measurement results is analyzed, and the results can offer a reference for industrial applications.
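A hedged back-of-envelope version of the theoretical limit the abstract refers to: for a heterodyne (dual-frequency) interferometer the Doppler shift 2v/λ must stay below the frequency difference of the source, giving v_max ≈ λ·Δf/2 for a single-pass configuration. The wavelength and split frequency below are illustrative, not the paper's instrument parameters.

```python
# Hedged sketch: theoretical maximum measurable velocity of a heterodyne interferometer.
wavelength = 632.8e-9      # m, He-Ne laser (assumed)
delta_f = 2.0e6            # Hz, frequency difference of the two polarizations (assumed)

v_max = wavelength * delta_f / 2.0
print(f"theoretical maximum measurable velocity ≈ {v_max * 1000:.1f} mm/s")
# With these numbers ≈ 0.63 m/s; the measured limit is typically somewhat lower,
# as the abstract notes, because of the optics and electronics bandwidth.
```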
The yielding transition in amorphous solids under oscillatory shear deformation
Leishangthem, Premkumar; Parmar, Anshul D. S.; Sastry, Srikanth
2017-01-01
Amorphous solids are ubiquitous among natural and man-made materials. Often used as structural materials for their attractive mechanical properties, their utility depends critically on their response to applied stresses. Processes underlying such mechanical response, and in particular the yielding behaviour of amorphous solids, are not satisfactorily understood. Although studied extensively, observed yielding behaviour can be gradual and depend significantly on conditions of study, making it difficult to convincingly validate existing theoretical descriptions of a sharp yielding transition. Here we employ oscillatory deformation as a reliable probe of the yielding transition. Through extensive computer simulations for a wide range of system sizes, we demonstrate that cyclically deformed model glasses exhibit a sharply defined yielding transition with characteristics that are independent of preparation history. In contrast to prevailing expectations, the statistics of avalanches reveals no signature of the impending transition, but exhibit dramatic, qualitative, changes in character across the transition. PMID:28248289
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) can reduce short circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is changed and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results of the samples, the whole length of CCs used in the design of a SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L that all trees in T, when restricted to X, are consistent with.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
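A hedged sketch of the bound stated in the abstract: the maximum seismic moment is taken as the modulus of rigidity times the total injected volume, converted to moment magnitude with the Hanks-Kanamori relation. The shear modulus and injection volumes are typical illustrative values, not case-history data.

```python
# Hedged sketch: upper-bound magnitude from injected fluid volume, M0_max = G * V.
import math

G = 3.0e10                      # Pa, modulus of rigidity of the rock mass (assumed)

def max_magnitude(injected_volume_m3):
    m0_max = G * injected_volume_m3                    # N*m, bound on seismic moment
    return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)    # Hanks-Kanamori moment magnitude

for v in (1e4, 1e5, 1e6):        # m^3 of injected fluid
    print(f"V = {v:.0e} m^3  ->  M_max ≈ {max_magnitude(v):.2f}")
```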
Maximum magnitude earthquakes induced by fluid injection
McGarr, A.
2014-02-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
Systematics in delayed neutron yields
Ohsawa, Takaaki [Kinki Univ., Higashi-Osaka, Osaka (Japan). Atomic Energy Research Inst.
1998-03-01
An attempt was made to reproduce the systematic trend observed in the delayed neutron yields for actinides on the basis of the five-Gaussian representation of the fission yield together with available data sets for the delayed neutron emission probability. It was found that the systematic decrease in DNY for heavier actinides is mainly due to the decrease of the fission yields of precursors on the lighter side of the light fragment region. (author)
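A hedged sketch of the summation that underlies such systematics: the total delayed neutron yield is the sum over precursor nuclides of cumulative fission yield times the delayed-neutron emission probability Pn. The three-row mini-library below is illustrative only, not evaluated data.

```python
# Hedged sketch: delayed neutron yield by the summation method, nu_d = sum(Y_cum * Pn).
precursors = {
    # nuclide: (cumulative yield per fission, Pn)  -- illustrative numbers
    "Br-87": (0.020, 0.026),
    "I-137": (0.031, 0.071),
    "Rb-94": (0.016, 0.105),
}

nu_d = sum(y_cum * pn for y_cum, pn in precursors.values())
print(f"delayed neutron yield ≈ {nu_d:.4f} n/fission")
```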
Impacts on potential ethanol and crude protein yield in alfalfa
An alfalfa (Medicago sativa L.) biomass energy production system would produce two products. Leaves would be separated from stems to produce a high protein feed for livestock while stems would be processed to produce ethanol. Therefore, maximum yields of both leaves and stems are essential for profi...
Interdependence of yield and yield components of confectionary sunflower hybrids
Hladni Nada
2011-01-01
The two most important criteria for introducing new confectionary hybrids into production are high seed and protein yield. That is why it is important to find traits that are measurable and that at the same time show a strong correlation with seed and protein yield, so that they can be used as criteria for confectionary hybrid breeding. Results achieved during 2008 at the locations Rimski Šančevi (Region of Vojvodina) and Kula (Central Serbia) show that the new confectionary hybrids express higher seed yields in comparison to the standards (Vranac and Cepko), though with a lower seed oil content. A very strong positive correlation was determined between seed yield and seed protein content, kernel content and mass of 1000 seeds. A very strong positive correlation was also determined between seed protein content, seed yield and mass of 1000 seeds, and protein yield. This indicates that seed yield, seed protein content and mass of 1000 seeds have a high influence on protein yield. The degree of interdependence between different traits indicates a direction which should facilitate better planning of the sunflower breeding program.
The maximum intelligible range of the human voice
Boren, Braxton
This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people that could hear such a voice unamplified in the open air. In particular, it examines an early auditory experiment by Benjamin Franklin which sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of such a noise source are examined to more precisely pinpoint Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and material of Market Street is constructed as a computer model which is then used to construct an acoustic cone tracing model. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings are carried out on trained actors and singers to determine their maximum time-averaged SPL at 1 m. This suggests that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Based on Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
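A hedged sketch of the spreading-loss arithmetic behind estimates of this kind: with simple spherical spreading, SPL(r) = SPL(1 m) - 20·log10(r), and intelligibility requires some margin above the background noise. The source level, required margin and listener density are assumptions for illustration; the dissertation itself used cone-tracing models and STI rather than free-field spreading.

```python
# Hedged sketch: maximum intelligible radius and crowd size under free-field spreading.
import math

spl_1m = 90.0          # dBA, time-averaged speech level at 1 m on axis (assumed)
background = 42.0      # dBA, quiet outdoor crowd (assumed)
required_snr = 6.0     # dB margin assumed for bare intelligibility

# Largest radius at which speech stays required_snr above the background
r_max = 10 ** ((spl_1m - background - required_snr) / 20.0)
area = 0.5 * math.pi * r_max**2        # semicircular audience in front of the speaker
density = 1.0                           # listeners per square metre (assumed)
print(f"r_max ≈ {r_max:.0f} m, crowd ≈ {area * density:.0f} listeners")
```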
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over one in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
Friedrich, Harald
2017-01-01
This expanded and updated well-established textbook contains an advanced presentation of quantum mechanics adapted to the requirements of modern atomic physics. It includes topics of current interest such as semiclassical theory, chaos, atom optics and Bose-Einstein condensation in atomic gases. In order to facilitate the consolidation of the material covered, various problems are included, together with complete solutions. The emphasis on theory enables the reader to appreciate the fundamental assumptions underlying standard theoretical constructs and to embark on independent research projects. The fourth edition of Theoretical Atomic Physics contains an updated treatment of the sections involving scattering theory and near-threshold phenomena manifest in the behaviour of cold atoms (and molecules). Special attention is given to the quantization of weakly bound states just below the continuum threshold and to low-energy scattering and quantum reflection just above. Particular emphasis is laid on the fundamen...
Compendium of theoretical physics
Wachter, Armin
2006-01-01
Mechanics, Electrodynamics, Quantum Mechanics, and Statistical Mechanics and Thermodynamics comprise the canonical undergraduate curriculum of theoretical physics. In Compendium of Theoretical Physics, Armin Wachter and Henning Hoeber offer a concise, rigorous and structured overview that will be invaluable for students preparing for their qualifying examinations, readers needing a supplement to standard textbooks, and research or industrial physicists seeking a bridge between extensive textbooks and formula books. The authors take an axiomatic-deductive approach to each topic, starting the discussion of each theory with its fundamental equations. By subsequently deriving the various physical relationships and laws in logical rather than chronological order, and by using a consistent presentation and notation throughout, they emphasize the connections between the individual theories. The reader’s understanding is then reinforced with exercises, solutions and topic summaries. Unique Features: Every topic is ...
An efficient approximation algorithm for finding a maximum clique using Hopfield network learning.
Wang, Rong Long; Tang, Zheng; Cao, Qi Ping
2003-07-01
In this article, we present a solution to the maximum clique problem using a gradient-ascent learning algorithm of the Hopfield neural network. This method provides a near-optimum parallel algorithm for finding a maximum clique. To do this, we use the Hopfield neural network to generate a near-maximum clique and then modify weights in a gradient-ascent direction to allow the network to escape from the state of near-maximum clique to maximum clique or better. The proposed parallel algorithm is tested on two types of random graphs and some benchmark graphs from the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS). The simulation results show that the proposed learning algorithm can find good solutions in reasonable computation time.
Effect of different intercropping patterns on yield and yield components of dill and fenugreek
Behzad Shokati
2014-12-01
A field experiment was conducted based on a randomized complete block design (RCBD) with three replications during 2011 at the research farm of the University of Tabriz, Iran. In this study two medicinal plants, dill (Anethum graveolens L.) and fenugreek (Trigonella foenum-graecum), were intercropped in different additive (1:20, 1:40 and 1:60) and different replacement (1:1, 1:2 and 1:3) series. Results showed that dill in the additive treatments, especially in the 1:20 and 1:60 series, had maximum plant fresh and dry weights, umbels per plant, 1000 seed weight, seeds per plant, biological yield and harvest index. However, fenugreek in the replacement treatments, especially in the 1:3 and 1:2 series, had maximum biological yield, pods on the main stem, pods on branches, seeds per pod, seed weight and grain yield. Fenugreek, as a medicinal, forage and legume crop, promoted dill growth characters and could be an effective plant in intercropping systems.
hamed javadi
2009-06-01
In order to study the effect of planting dates and nitrogen rates on yield and yield components of black cumin (Nigella sativa L.), a field experiment was conducted in spring 2006 at the Azad University of Birjand. The experiment was a split plot based on a completely randomized block design with 3 replications. Four planting dates (21 March, 4 April, 21 April and 5 May) were used as main plots and 3 levels of nitrogen (40, 80 and 120 kg/ha) as sub plots. The results showed that planting date had a significant effect on traits such as plant height, number of main branches, number of follicles per plant, biological yield and grain yield. Maximum plant height, number of follicles per plant and biological yield were observed at the first planting date, and maximum number of main branches and grain yield were observed at the first and second planting dates. Planting date had no significant effect on number of follicles on main branches, number of seeds per follicle, 1000-seed weight or harvest index. Nitrogen rate and the interaction between planting date and nitrogen rate had no significant effect on the traits. According to the results of this experiment, 40 kg/ha nitrogen is sufficient for black cumin, and the planting dates of 21 March and 4 April were recognised as better because of their higher yield.
Robustness - theoretical framework
Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.
2010-01-01
More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new struct...... of this fact sheet is to describe a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines....
Electrochemical kinetics theoretical aspects
Vetter, Klaus J
1967-01-01
Electrochemical Kinetics: Theoretical Aspects focuses on the processes, methodologies, reactions, and transformations in electrochemical kinetics. The book first offers information on electrochemical thermodynamics and the theory of overvoltage. Topics include equilibrium potentials, concepts and definitions, electrical double layer and electrocapillarity, and charge-transfer, diffusion, and reaction overvoltage. Crystallization overvoltage, total overvoltage, and resistance polarization are also discussed. The text then examines the methods of determining electrochemical reaction mechanisms
Theoretical Delay Time Distributions
Nelemans, Gijs; Bours, Madelon
2012-01-01
We briefly discuss the method of population synthesis to calculate theoretical delay time distributions of type Ia supernova progenitors. We also compare the results of the different research groups and conclude that although one of the main differences in the results for single degenerate progenitors is the retention efficiency with which accreted hydrogen is added to the white dwarf core, this cannot explain all the differences.
Theoretical Delay Time Distributions
Nelemans, Gijs; Toonen, Silvia; Bours, Madelon
2013-01-01
We briefly discuss the method of population synthesis to calculate theoretical delay time distributions of Type Ia supernova progenitors. We also compare the results of different research groups and conclude that, although one of the main differences in the results for single degenerate progenitors is the retention efficiency with which accreted hydrogen is added to the white dwarf core, this alone cannot explain all the differences.
Silicene: Recent theoretical advances
Lew Yan Voon, L. C.
2016-04-14
Silicene is a two-dimensional allotrope of silicon with a puckered hexagonal structure closely related to the structure of graphene and that has been predicted to be stable. To date, it has been successfully grown in solution (functionalized) and on substrates. The goal of this review is to provide a summary of recent theoretical advances in the properties of both free-standing silicene as well as in interaction with molecules and substrates, and of proposed device applications.
MARKETING MIX THEORETICAL ASPECTS
Margarita Išoraitė
2016-01-01
Aim of article is to analyze marketing mix theoretical aspects. The article discusses that marketing mix is one of the main objectives of the marketing mix elements for setting objectives and marketing budget measures. The importance of each element depends not only on the company and its activities, but also on the competition and time. All marketing elements are interrelated and should be seen in the whole of their actions. Some items may have greater importance than others; it depends main...
Theoretical numerical analysis
Wendroff, Burton
1966-01-01
Theoretical Numerical Analysis focuses on the presentation of numerical analysis as a legitimate branch of mathematics. The publication first elaborates on interpolation and quadrature and approximation. Discussions focus on the degree of approximation by polynomials, Chebyshev approximation, orthogonal polynomials and Gaussian quadrature, approximation by interpolation, nonanalytic interpolation and associated quadrature, and Hermite interpolation. The text then ponders on ordinary differential equations and solutions of equations. Topics include iterative methods for nonlinear systems, matri
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain...... boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
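As a minimal sketch of the two offline heuristics named above (First-Fit-Increasing and First-Fit-Decreasing are the same first-fit packing rule applied to items sorted in increasing or decreasing order; in the maximum resource variant the number of bins opened is the quantity of interest). Item sizes and bin capacity are hypothetical.

```python
def first_fit(items, capacity=1.0):
    """Place each item into the first open bin with enough room, opening a new bin when none fits."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity + 1e-12:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

items = [0.4, 0.3, 0.3, 0.6, 0.2, 0.8]                 # hypothetical item sizes

ffi = first_fit(sorted(items))                          # First-Fit-Increasing
ffd = first_fit(sorted(items, reverse=True))            # First-Fit-Decreasing
print(len(ffi), ffi)                                    # bins opened by FFI
print(len(ffd), ffd)                                    # bins opened by FFD
```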
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \\cite{ma96} in a search for point sources in excess to a model for the background radiation (e.g. \\cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic unce...... uncertainties in region of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky....
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected...... in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
Theoretical Developments in SUSY
Shifman, M.
2009-01-01
I am proud that I was personally acquainted with Julius Wess. We first met in 1999 when I was working on the Yuri Golfand Memorial Volume (The Many Faces of the Superworld, World Scientific, Singapore, 2000). I invited him to contribute, and he accepted this invitation with enthusiasm. After that, we met many times, mostly at various conferences in Germany and elsewhere. I was lucky to discuss with Julius questions of theoretical physics, and hear his recollections on how supersymmetry was born. In physics Julius was a visionary, who paved the way to generations of followers. In everyday life he was a kind and modest person, always ready to extend a helping hand to people who were in need of his help. I remember him telling me how concerned he was about the fate of theoretical physicists in Eastern Europe after the demise of communism. His ties with Israeli physicists bore a special character. I am honored by the opportunity to contribute an article to the Julius Wess Memorial Volume. I will review theoretical developments of the recent years in non-perturbative supersymmetry.
Theoretical developments in SUSY
Shifman, M. [University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)
2009-01-15
I am proud that I was personally acquainted with Julius Wess. We first met in 1999 when I was working on the Yuri Golfand Memorial Volume (The Many Faces of the Superworld, World Scientific, Singapore, 2000). I invited him to contribute, and he accepted this invitation with enthusiasm. After that, we met many times, mostly at various conferences in Germany and elsewhere. I was lucky to discuss with Julius questions of theoretical physics, and hear his recollections on how supersymmetry was born. In physics Julius was a visionary, who paved the way to generations of followers. In everyday life he was a kind and modest person, always ready to extend a helping hand to people who were in need of his help. I remember him telling me how concerned he was about the fate of theoretical physicists in Eastern Europe after the demise of communism. His ties with Israeli physicists bore a special character. I am honored by the opportunity to contribute an article to the Julius Wess Memorial Volume. I review theoretical developments of the recent years in non-perturbative supersymmetry. (orig.)
Working Hard and Working Smart: Motivation and Ability during Typical and Maximum Performance
Klehe, Ute-Christine; Anderson, Neil
2007-01-01
The distinction between what people "can" do (maximum performance) and what they "will" do (typical performance) has received considerable theoretical but scant empirical attention in industrial-organizational psychology. This study of 138 participants performing an Internet-search task offers an initial test and verification of P. R. Sackett, S.…
Maximum Entropy Production and Non-Gaussian Climate Variability
Sura, Philip
2016-01-01
Earth's atmosphere is in a state far from thermodynamic equilibrium. For example, the large scale equator-to-pole temperature gradient is maintained by tropical heating, polar cooling, and a midlatitude meridional eddy heat flux predominantly driven by baroclinically unstable weather systems. Based on basic thermodynamic principles, it can be shown that the meridional heat flux, in combination with the meridional temperature gradient, acts to maximize entropy production of the atmosphere. In fact, maximum entropy production (MEP) has been successfully used to explain the observed mean state of the atmosphere and other components of the climate system. However, one important feature of the large scale atmospheric circulation is its often non-Gaussian variability about the mean. This paper presents theoretical and observational evidence that some processes in the midlatitude atmosphere are significantly non-Gaussian to maximize entropy production. First, after introducing the basic theory, it is shown that the ...
Lashkari, Bahman; Mandelis, Andreas
2011-09-01
In this work, a detailed theoretical and experimental comparison between various key parameters of the pulsed and frequency-domain (FD) photoacoustic (PA) imaging modalities is developed. The signal-to-noise ratios (SNRs) of these methods are theoretically calculated in terms of transducer bandwidth, PA signal generation physics, and laser pulse or chirp parameters. Large differences between maximum (peak) SNRs were predicted. However, it is shown that in practice the SNR differences are much smaller. Typical experimental SNRs were 23.2 dB and 26.1 dB for FD-PA and time-domain (TD)-PA peak responses, respectively, from a subsurface black absorber. The SNR of the pulsed PA can be significantly improved with proper high-pass filtering of the signal, which minimizes but does not eliminate baseline oscillations. On the other hand, the SNR of the FD method can be enhanced substantially by increasing laser power and decreasing chirp duration (exposure) correspondingly, so as to remain within the maximum permissible exposure guidelines. The SNR crossover chirp duration is calculated as a function of transducer bandwidth and the conditions yielding higher SNR for the FD mode are established. Furthermore, it was demonstrated that the FD axial resolution is affected by both signal amplitude and limited chirp bandwidth. The axial resolution of the pulse is, in principle, superior due to its larger bandwidth; however, the bipolar shape of the signal is a drawback in this regard. Along with the absence of baseline oscillation in cross-correlation FD-PA, the FD phase signal can be combined with the amplitude signal to yield better axial resolution than pulsed PA, and without artifacts. The contrast of both methods is compared both in depth-wise (delay-time) and fixed delay time images. It was shown that the FD method possesses higher contrast, even after contrast enhancement of the pulsed response through filtering.
Arbutina Bojan
2011-01-01
AM CVn-type stars and ultra-compact X-ray binaries are extremely interesting semi-detached close binary systems in which the Roche lobe filling component is a white dwarf transferring mass to another white dwarf, neutron star or black hole. Earlier theoretical considerations show that there is a maximum mass ratio of AM CVn-type binary systems (qmax ≈ 2/3) below which the mass transfer is stable. In this paper we derive a slightly different value for qmax and, more interestingly, by applying the same procedure, we find the maximum expected white dwarf mass in ultra-compact X-ray binaries.
Specific yield: compilation of specific yields for various materials
Johnson, A.I.
1967-01-01
Specific yield is defined as the ratio of (1) the volume of water that a saturated rock or soil will yield by gravity to (2) the total volume of the rock or soil. Specific yield is usually expressed as a percentage. The value is not definitive, because the quantity of water that will drain by gravity depends on variables such as duration of drainage, temperature, mineral composition of the water, and various physical characteristics of the rock or soil under consideration. Values of specific yield nevertheless offer a convenient means by which hydrologists can estimate the water-yielding capacities of earth materials and, as such, are very useful in hydrologic studies. The present report consists mostly of direct or modified quotations from many selected reports that present and evaluate methods for determining specific yield, limitations of those methods, and results of the determinations made on a wide variety of rock and soil materials. Although no particular values are recommended in this report, a table summarizes values of specific yield, and their averages, determined for 10 rock textures.
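A trivial helper implementing the definition above; the sample values are hypothetical.

```python
def specific_yield(drained_water_volume, total_volume):
    """Specific yield (%) = volume of water drained by gravity / total volume of rock or soil."""
    return 100.0 * drained_water_volume / total_volume

# hypothetical sample: 1.0 m^3 of saturated coarse sand drains 0.27 m^3 of water
print(specific_yield(0.27, 1.0))   # -> 27.0 (%)
```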
Incorporating phenology into yield models
Gray, J. M.; Friedl, M. A.
2015-12-01
Because the yields of many crops are sensitive to meteorological forcing during specific growth stages, phenological information has potential utility in yield mapping and forecasting exercises. However, most attempts to explain the spatiotemporal variability in crop yields with weather data have relied on growth stage definitions that do not change from year-to-year, even though planting, maturity, and harvesting dates show significant interannual variability. We tested the hypothesis that quantifying temperature exposures over dynamically determined growth stages would better explain observed spatiotemporal variability in crop yields than statically defined time periods. Specifically, we used National Agricultural and Statistics Service (NASS) crop progress data to identify the timing of the start of the maize reproductive growth stage ("silking"), and examined the correlation between county-scale yield anomalies and temperature exposures during either the annual or long-term average silking period. Consistent with our hypothesis and physical understanding, yield anomalies were more correlated with temperature exposures during the actual, rather than the long-term average, silking period. Nevertheless, temperature exposures alone explained a relatively low proportion of the yield variability, indicating that other factors and/or time periods are also important. We next investigated the potential of using remotely sensed land surface phenology instead of NASS progress data to retrieve crop growth stages, but encountered challenges related to crop type mapping and subpixel crop heterogeneity. Here, we discuss the potential of overcoming these challenges and the general utility of remotely sensed land surface phenology in crop yield mapping.
Coiling of yield stress fluids
Y. Rahmani; M. Habibi; A. Javadi; D. Bonn
2011-01-01
We present an experimental investigation of the coiling of a filament of a yield stress fluid falling on a solid surface. We use two kinds of yield stress fluids, shaving foam and hair gel, and show that the coiling of the foam is similar to the coiling of an elastic rope. Two regimes of coiling (el
B Saadatian
2013-04-01
This research was carried out in order to study the tolerance of yield and yield components of wheat cultivars to salinity of irrigation water at sensitive growth stages. It was conducted as a factorial experiment based on a randomized complete block design with 3 replications in the greenhouse of the Faculty of Agriculture, Bu-Ali Sina University, in 2009. Treatments included the wheat cultivars Alvand, Tous, Sayson and Navid and salinity of irrigation water induced by sodium chloride at five levels: 0, 4, 8, 12 and 16 dS m-1. The results showed that percentage and rate of emergence, plant height, 1000-grain weight, number of seeds per spike, number of spikes per pot, and biological and grain yield decreased with increasing salinity level. At all stress levels Navid cv. had the highest emergence percentage. Under non-stress conditions and at 4 dS m-1, Alvand cv., and at higher stress levels, Tous cv., was tallest in the reproductive phase. At the control and 4 dS m-1, Sayson cv., and at 8, 12 and 16 dS m-1, Tous cv., was significantly superior to the other cultivars in most yield and yield component traits. The tolerance index of Sayson cv. at 4 and 8 dS m-1 was higher than that of the other cultivars, but at 12 and 16 dS m-1 the maximum value of this index belonged to Tous cv. At all salinity levels, Alvand cv. had the lowest stress tolerance index. Number of spikes per pot had the maximum direct effect on grain yield of wheat cultivars under stress conditions. Also, among the indirect effects, the indirect effect of biological yield via number of spikes per pot had the maximum share in wheat seed yield.
Theoretical model of ``fuzz'' growth
Krasheninnikov, Sergei; Smirnov, Roman
2012-10-01
Recent, more detailed experiments on tungsten irradiation with low-energy helium plasma, relevant to the near-wall plasma conditions in a magnetic fusion reactor like ITER, demonstrated (e.g. see Ref. 1) a very dramatic change in both the surface morphology and the near-surface material structure of the samples. In particular, it was shown that long (mm-scale) and thin (nm-scale) fiber-like structures filled with nano-bubbles, so-called ``fuzz,'' start to grow. In this work a theoretical model of ``fuzz'' growth [2] describing the main features observed in experiments is presented. The model is based on the assumption of enhanced creep of tungsten containing a significant fraction of helium atoms and clusters. The results of MD simulations [3] support this idea and demonstrate a strong reduction of the yield strength over the whole temperature range. They also show that the ``flow'' of tungsten strongly facilitates coagulation of helium clusters and the formation of nano-bubbles. [1] M. J. Baldwin, et al., J. Nucl. Mater. 390-391 (2009) 885; [2] S. I. Krasheninnikov, Physica Scripta T145 (2011) 014040; [3] R. D. Smirnov and S. I. Krasheninnikov, submitted to J. Nucl. Materials.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. A detailed discussion on circuit-oriented model development is given, and then the MPPT effectiveness of various converter systems is verified through simulations. The proposed theory and analysis are validated through experimental investigations.
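A small numerical illustration (not the paper's converter analysis): locating the MPP of an idealized single-diode i-v curve by scanning voltage, which shows how the MPP shifts with insolation. All model parameters (i_sc, i_0, v_t) are invented for the sketch.

```python
import math

def pv_current(v, insolation=1.0, i_sc=5.0, i_0=1e-9, v_t=0.7):
    """Idealized single-diode PV terminal current (A) at voltage v (V); v_t is an effective module-level thermal voltage."""
    return insolation * i_sc - i_0 * (math.exp(v / v_t) - 1.0)

def maximum_power_point(insolation, v_max=30.0, steps=3000):
    """Scan the i-v curve and return (V_mpp, P_mpp)."""
    best_v, best_p = 0.0, 0.0
    for k in range(steps + 1):
        v = v_max * k / steps
        p = v * pv_current(v, insolation)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

for g in (1.0, 0.6, 0.3):                      # relative solar insolation
    v_mpp, p_mpp = maximum_power_point(g)
    print(f"insolation={g:.1f}  V_mpp={v_mpp:.2f} V  P_mpp={p_mpp:.1f} W")
```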
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order or the number of faces of G, respectively. Polyhedral graphs that attain these bounds are constructed.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
textabstractIn a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300~\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}\,y_{e}^{5})$, where $y_{e}$ is the electron Yukawa coupling, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage, $S_{rad}$, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h} = \mathcal{O}(300~\mathrm{GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h} \sim T_{BBN}^{2}/(M_{pl}\,y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which the Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
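A quick order-of-magnitude check of the quoted relation, using commonly quoted ballpark values for the inputs (my assumptions, not taken from the paper); it lands at a few hundred GeV, consistent with the stated O(300 GeV).

```python
# Rough check of v_h ~ T_BBN^2 / (M_pl * y_e^5); all inputs are ballpark, illustrative values.
T_BBN = 1e-3        # GeV, temperature at the start of Big Bang nucleosynthesis (~1 MeV)
M_pl  = 1.22e19     # GeV, Planck mass
y_e   = 2.9e-6      # electron Yukawa coupling, roughly sqrt(2) * m_e / v_h

v_h = T_BBN**2 / (M_pl * y_e**5)
print(f"v_h ~ {v_h:.0f} GeV")   # a few hundred GeV
```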
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of each subject's five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
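The scaling of reliability with the number of trials and days reported above is consistent with the Spearman-Brown prophecy formula; the snippet below merely reproduces that arithmetic as a consistency check, not the study's actual ICC computation.

```python
def spearman_brown(r_single, k):
    """Reliability of the average of k parallel measurements, given the single-measurement reliability."""
    return k * r_single / (1.0 + (k - 1.0) * r_single)

# single-trial reliability per day, aggregated over five trials
print(round(spearman_brown(0.939, 5), 3))   # ~0.987
# single-day reliability, aggregated over several days
for days in (1, 2, 3):
    print(days, round(spearman_brown(0.836, days), 3))   # ~0.836, 0.911, 0.938
```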
Maximum Phonation Time: Variability and Reliability
R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings
2010-01-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia v
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointe- gration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....
Boelt, Birte; Studer, Bruno
2010-01-01
Seed yield is a trait of major interest for many fodder and amenity grass species and has received increasing attention since seed multiplication is economically relevant for novel grass cultivars to compete in the commercial market. Although seed yield is a complex trait and affected...... by agricultural practices as well as environmental factors, traits related to seed production reveal considerable genetic variation, prerequisite for improvement by direct or indirect selection. This chapter first reports on the biological and physiological basics of the grass reproduction system, then highlights...... important aspects and components affecting the seed yield potential and the agronomic and environmental aspects affecting the utilization and realization of the seed yield potential. Finally, it discusses the potential of plant breeding to sustainably improve total seed yield in fodder and amenity grasses....
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
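A compact sketch of the kind of iterative maximum-likelihood procedure discussed above, here plain EM for a two-component univariate normal mixture (a generic illustration, not the published procedure); the data are synthetic.

```python
import math
import random

def em_gaussian_mixture(data, iters=200):
    """EM iterations for a two-component univariate Gaussian mixture; returns (weights, means, variances)."""
    mu = [min(data), max(data)]          # crude initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each data point
        resp = []
        for x in data:
            dens = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k])) / math.sqrt(2.0 * math.pi * var[k])
                    for k in (0, 1)]
            s = dens[0] + dens[1]
            resp.append([d / s for d in dens])
        # M-step: re-estimate mixing weights, means and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk + 1e-9
    return w, mu, var

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(300)] + [rng.gauss(5.0, 1.0) for _ in range(300)]
print(em_gaussian_mixture(data))
```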
Theoretical Astrophysics at Fermilab
2004-01-01
The Theoretical Astrophysics Group works on a broad range of topics ranging from string theory to data analysis in the Sloan Digital Sky Survey. The group is motivated by the belief that a deep understanding of fundamental physics is necessary to explain a wide variety of phenomena in the universe. During the three years 2001-2003 of our previous NASA grant, over 120 papers were written; ten of our postdocs went on to faculty positions; and we hosted or organized many workshops and conferences. Kolb and collaborators focused on the early universe, in particular models and ramifications of the theory of inflation. They also studied models with extra dimensions, new types of dark matter, and the second-order effects of super-horizon perturbations. Stebbins, Frieman, Hui, and Dodelson worked on phenomenological cosmology, extracting cosmological constraints from surveys such as the Sloan Digital Sky Survey. They also worked on theoretical topics such as weak lensing, reionization, and dark energy. This work has proved important to a number of experimental groups [including those at Fermilab] planning future observations. In general, the work of the Theoretical Astrophysics Group has served as a catalyst for experimental projects at Fermilab. An example of this is the Joint Dark Energy Mission. Fermilab is now a member of SNAP, and much of the work done here is by people formerly working on the accelerator. We have created an environment where many of these people made the transition from physics to astronomy. We also worked on many other topics related to NASA's focus: cosmic rays, dark matter, the Sunyaev-Zel'dovich effect, the galaxy distribution in the universe, and the Lyman alpha forest. The group organized and hosted a number of conferences and workshops over the years covered by the grant. Among them were:
YIELD AND YIELD COMPONENTS OF INVESTIGATED RAPESEED HYBRIDS AND CULTIVARS
Milan Pospišil
2014-06-01
To evaluate new winter rapeseed hybrids and cultivars, investigations were conducted at the experimental field of the Faculty of Agriculture, University of Zagreb, in the period 2009/10 - 2011/12. The trial involved 11 hybrids and 5 cultivars of rapeseed from 5 seed producers selling seed in Croatia. The studied rapeseed hybrids and cultivars differed significantly in seed and oil yields, oil content and yield components (seed number per silique and 1000-seed weight). However, a number of hybrids rendered identical results, since the differences in the investigated properties were within statistically allowable deviation. Hybrids Traviata and CWH 119 can be singled out based on the achieved seed and oil yields, and the cultivar Ricco and hybrids CWH 119 and PR46W15 for their high oil content in seed. Hybrids with a larger silique number per plant also achieved a higher seed yield.
Maximum MIMO System Mutual Information with Antenna Selection and Interference
Rick S. Blum
2004-05-01
Maximum system mutual information is considered for a group of interfering users employing single user detection and antenna selection of multiple transmit and receive antennas for flat Rayleigh fading channels with independent fading coefficients for each path. In the case considered, the only feedback of channel state information to the transmitter is that required for antenna selection, but channel state information is assumed at the receiver. The focus is on extreme cases with very weak interference or very strong interference. It is shown that the optimum signaling covariance matrix is sometimes different from the standard scaled identity matrix. In fact, this is true even for cases without interference if SNR is sufficiently weak. Further, the scaled identity matrix is actually that covariance matrix that yields worst performance if the interference is sufficiently strong.
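A rough illustration of the quantities involved (single-user, interference-free, equal-power signaling, so not the paper's interfering-user setting): brute-force selection of the transmit-antenna subset maximizing the Gaussian MIMO mutual information log2 det(I + (SNR/Nt) H H^H). Channel, SNR and subset size are made up.

```python
import itertools
import numpy as np

def mutual_information(H, snr):
    """Gaussian MIMO mutual information (bits/s/Hz) with an equal-power, scaled-identity input covariance."""
    nt = H.shape[1]
    m = np.eye(H.shape[0]) + (snr / nt) * (H @ H.conj().T)
    _, logdet = np.linalg.slogdet(m)
    return float(logdet) / np.log(2.0)

def best_transmit_subset(H, snr, n_select):
    """Exhaustive search for the transmit-antenna subset that maximizes mutual information."""
    best_subset, best_mi = None, -np.inf
    for subset in itertools.combinations(range(H.shape[1]), n_select):
        mi = mutual_information(H[:, list(subset)], snr)
        if mi > best_mi:
            best_subset, best_mi = subset, mi
    return best_subset, best_mi

rng = np.random.default_rng(0)
# 4x4 i.i.d. flat Rayleigh channel: independent unit-variance complex Gaussian coefficients
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2.0)
print(best_transmit_subset(H, snr=10.0, n_select=2))
```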
Institute for Theoretical Physics
Giddings, S.B.; Ooguri, H.; Peet, A.W.; Schwarz, J.H.
1998-06-01
String theory is the only serious candidate for a unified description of all known fundamental particles and interactions, including gravity, in a single theoretical framework. Over the past two years, activity in this subject has grown rapidly, thanks to dramatic advances in understanding the dynamics of supersymmetric field theories and string theories. The cornerstone of these new developments is the discovery of duality which relates apparently different string theories and transforms difficult strongly coupled problems of one theory into weakly coupled problems of another theory.
Theoretical astrophysics an introduction
Bartelmann, Matthias
2013-01-01
A concise yet comprehensive introduction to the central theoretical concepts of modern astrophysics, presenting hydrodynamics, radiation, and stellar dynamics all in one textbook. Adopting a modular structure, the author illustrates a small number of fundamental physical methods and principles, which are sufficient to describe and understand a wide range of seemingly very diverse astrophysical phenomena and processes. For example, the formulae that define the macroscopic behavior of stellar systems are all derived in the same way from the microscopic distribution function. This function it
Shivamoggi, Bhimsen K
1998-01-01
"Although there are many texts and monographs on fluid dynamics, I do not know of any which is as comprehensive as the present book. It surveys nearly the entire field of classical fluid dynamics in an advanced, compact, and clear manner, and discusses the various conceptual and analytical models of fluid flow." - Foundations of Physics on the first edition. Theoretical Fluid Dynamics functions equally well as a graduate-level text and a professional reference. Steering a middle course between the empiricism of engineering and the abstractions of pure mathematics, the author focuses
Theoretical Optics An Introduction
Römer, Hartmann
2004-01-01
Starting from basic electrodynamics, this volume provides a solid, yet concise introduction to theoretical optics, containing topics such as nonlinear optics, light-matter interaction, and modern topics in quantum optics, including entanglement, cryptography, and quantum computation. The author, with many years of experience in teaching and research, goes way beyond the scope of traditional lectures, enabling readers to keep up with the current state of knowledge. Both content and presentation make it essential reading for graduate and PhD students as well as a valuable reference for researche
Theoretical solid state physics
Haug, Albert
2013-01-01
Theoretical Solid State Physics, Volume 1 focuses on the study of solid state physics. The volume first takes a look at the basic concepts and structures of solid state physics, including potential energies of solids, concept and classification of solids, and crystal structure. The book then explains single-electron approximation wherein the methods for calculating energy bands; electron in the field of crystal atoms; laws of motion of the electrons in solids; and electron statistics are discussed. The text describes general forms of solutions and relationships, including collective electron i
Stimulus-dependent maximum entropy models of neural population codes.
Granot-Atedgi, Einat; Tkačik, Gašper; Segev, Ronen; Schneidman, Elad
2013-01-01
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model-a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
On the Threshold of Maximum-Distance Separable Codes
Kindarji, Bruno; Chabanne, Hervé
2010-01-01
Starting from a practical use of Reed-Solomon codes in a cryptographic scheme published in Indocrypt'09, this paper deals with the threshold of linear $q$-ary error-correcting codes. The security of this scheme is based on the intractability of polynomial reconstruction when there is too much noise in the vector. Our approach switches from this paradigm to an information-theoretical point of view: is there a class of elements that are so far away from the code that the list size is always superpolynomial? Or, dually speaking, is maximum-likelihood decoding almost surely impossible? We relate this issue to the decoding threshold of a code, and show that when the minimal distance of the code is high enough, the threshold effect is very sharp. In a second part, we give explicit lower bounds on the threshold of Maximum-Distance Separable codes such as Reed-Solomon codes, and compute the threshold for the toy example that motivates this study.
Interdependence of yield and yield components of confectionary sunflower hybrids
Hladni Nada; Jocić Siniša; Miklič Vladimir; Saftić-Panković Dejana; Kraljević-Balalić Marija
2011-01-01
The two most important criteria for introducing new confectionary hybrids into production are high seed and protein yield. That is why it is important to find the traits that are measurable, and that at the same time show a strong correlation with seed and protein yield, so that they can be used as a criteria for confectionary hybrid breeding. Results achieved during 2008 at the locations Rimski Sancevi (Region of Vojvodina) and Kula (Central Serbia) show t...
Neutrino Mixing: Theoretical Overview
Altarelli, Guido
2013-01-01
We present a concise review of the recent important experimental developments on neutrino mixing (hints for sterile neutrinos, large $\\theta_{13}$, possible non maximal $\\theta_{23}$, approaching sensitivity on $\\delta_{CP}$) and their implications on models of neutrino mixing. The new data disfavour many models but the surviving ones still span a wide range going from Anarchy (no structure, no symmetry in the lepton sector) to a maximum of symmetry, as for the models based on discrete non-abelian flavour groups that can be improved following the indications from the data.
Effects of Nitrogen Rates and Application Method on Grain Yield and Yield
Sh Babazadeh
2012-06-01
Proper application of N fertilizer and its optimization for increasing the economic yield of rice is definitely important. In order to determine the best N application method and amount according to the growth stages of hybrid rice, an experiment was carried out at the experimental farm of RRII as a factorial experiment based on a randomized complete block design with 3 replications. The treatments included 6 application methods as follows: total nitrogen at transplanting; 50% at transplanting + 50% at early tillering; 50% at transplanting + 50% at panicle initiation; 50% at transplanting + 25% at maximum tillering + 25% at booting; 34% at transplanting + 33% at early tillering + 33% at booting; and 70% at transplanting + 30% at panicle initiation. Three levels of nitrogen (90, 120 and 150 kg/ha) from a urea source were also used. Recorded traits were grain yield and yield components. Results showed significant interactions between split methods and N rates on yield, flag leaf area, filled and unfilled grain number per panicle and percentage fertility (p
Kiviet, J.F.; Phillips, G.D.A.
2014-01-01
In dynamic regression models conditional maximum likelihood (least-squares) coefficient and variance estimators are biased. Using expansion techniques an approximation is obtained to the bias in variance estimation yielding a bias corrected variance estimator. This is achieved for both the standard
Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items
Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong
2012-01-01
For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…
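For context, a plain (unweighted) MAP ability estimate for dichotomous 2PL items under a standard normal prior, found on a grid; the adaptive weighting of the proposed WMAP estimator is not reproduced, and the item parameters and response pattern are hypothetical.

```python
import math

def map_ability(responses, a, b, grid=None):
    """MAP estimate of ability theta for dichotomous 2PL items under a N(0,1) prior."""
    if grid is None:
        grid = [i / 100.0 for i in range(-400, 401)]    # theta from -4 to 4
    best_theta, best_logpost = 0.0, -float("inf")
    for theta in grid:
        logpost = -0.5 * theta * theta                  # standard normal log-prior (up to a constant)
        for x, ai, bi in zip(responses, a, b):
            p = 1.0 / (1.0 + math.exp(-ai * (theta - bi)))
            logpost += math.log(p) if x == 1 else math.log(1.0 - p)
        if logpost > best_logpost:
            best_theta, best_logpost = theta, logpost
    return best_theta

# hypothetical 5-item test: discriminations a, difficulties b, and one response pattern
a = [1.2, 0.8, 1.5, 1.0, 0.9]
b = [-1.0, -0.5, 0.0, 0.5, 1.0]
print(map_ability([1, 1, 1, 0, 0], a, b))
```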
Chris B. LeDoux; John E. Baumgras; R. Bryan Selbe
1989-01-01
PROFIT-PC is a menu-driven, interactive PC (personal computer) program that estimates optimum product mix and maximum net harvesting revenue based on projected product yields and stump-to-mill timber harvesting costs. Required inputs include the number of trees/acre by species and 2-inch diameter-at-breast-height class, delivered product prices by species and product...
Maximum entropy reconstruction of spin densities involving non uniform prior
Schweizer, J.; Ressouche, E. [DRFMC/SPSMS/MDN CEA-Grenoble (France); Papoular, R.J. [CEA-Saclay, Gif sur Yvette (France). Lab. Leon Brillouin; Tasset, F. [Inst. Laue Langevin, Grenoble (France); Zheludev, A.I. [Brookhaven National Lab., Upton, NY (United States). Physics Dept.
1997-09-01
Diffraction experiments give microscopic information on structures in crystals. A method which uses the concept of maximum entropy (MaxEnt) appears to be a formidable improvement in the treatment of diffraction data. This method is based on a bayesian approach: among all the maps compatible with the experimental data, it selects the one which has the highest prior (intrinsic) probability. Considering that all the points of the map are equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution. This method has been used for the reconstruction of charge densities from X-ray data, for maps of nuclear densities from unpolarized neutron data as well as for distributions of spin density. The density maps obtained by this method, as compared to those resulting from the usual inverse Fourier transformation, are tremendously improved. In particular, any substantial deviation from the background is really contained in the data, as it costs entropy compared to a map that would ignore such features. However, in most of the cases, before the measurements are performed, some knowledge exists about the distribution which is investigated. It can range from the simple information of the type of scattering electrons to an elaborate theoretical model. In these cases, the uniform prior which considers all the different pixels as equally likely, is too weak a requirement and has to be replaced. In a rigorous bayesian analysis, Skilling has shown that prior knowledge can be encoded into the Maximum Entropy formalism through a model $m(\vec{r})$, via a new definition for the entropy given in this paper. In the absence of any data, the maximum of the entropy functional is reached for $\rho(\vec{r}) = m(\vec{r})$. Any substantial departure from the model, observed in the final map, is really contained in the data as, with the new definition, it costs entropy. This paper presents illustrations of model testing.
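For reference, the Skilling entropy relative to a model $m$ is commonly written (normalization conventions may differ from the paper's exact definition) as $S[\rho] = \sum_i \left[\rho_i - m_i - \rho_i \ln(\rho_i/m_i)\right]$, which satisfies $S \le 0$ with equality only for $\rho_i = m_i$; it reduces to the flat-prior (Boltzmann) form for constant $m_i$, and in the absence of data its maximum indeed reproduces the prior model $m(\vec{r})$.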
Grapevine canopy reflectance and yield
Minden, K. A.; Philipson, W. R.
1982-01-01
Field spectroradiometric and airborne multispectral scanner data were applied in a study of Concord grapevines. Spectroradiometric measurements of 18 experimental vines were collected on three dates during one growing season. Spectral reflectance, determined at 30 intervals from 0.4 to 1.1 microns, was correlated with vine yield, pruning weight, clusters/vine, and nitrogen input. One date of airborne multispectral scanner data (11 channels) was collected over commercial vineyards, and the average radiance values for eight vineyard sections were correlated with the corresponding average yields. Although some correlations were significant, they were inadequate for developing a reliable yield prediction model.
Maximum detection range limitation of pulse laser radar with Geiger-mode avalanche photodiode array
Luo, Hanjun; Xu, Benlian; Xu, Huigang; Chen, Jingbo; Fu, Yadan
2015-05-01
When designing and evaluating the performance of a laser radar system, the maximum achievable detection range is an essential parameter. The purpose of this paper is to propose a theoretical model of maximum detection range for simulating the Geiger-mode laser radar's ranging performance. Based on the laser radar equation and the requirement of a minimum acceptable detection probability, and assuming the primary electrons triggered by the echo photons obey Poisson statistics, the maximum range theoretical model is established. Using the system design parameters, the influence of five main factors, namely emitted pulse energy, noise, echo position, atmospheric attenuation coefficient, and target reflectivity, on the maximum detection range is investigated. The results show that stronger emitted pulse energy, a lower noise level, an earlier echo position in the range gate, a lower atmospheric attenuation coefficient, and higher target reflectivity result in a greater maximum detection range. It is also shown that it is important to select the minimum acceptable detection probability, which is equivalent to the system signal-to-noise ratio, for producing a greater maximum detection range and a lower false-alarm probability.
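A toy version of the idea, under stated assumptions: take a simplified radar-equation form for the mean number of primary photoelectrons, N(R) ∝ E_t ρ exp(-2σR)/R², use the Poisson single-gate detection probability P_d = 1 - exp(-N), and report the largest range still meeting a minimum acceptable P_d. The constant k and every parameter value below are invented for illustration.

```python
import math

def mean_photoelectrons(r, pulse_energy=1e-4, reflectivity=0.3, atten=1e-4, k=5e10):
    """Toy laser radar equation: mean primary photoelectrons returned from range r (m).
    k lumps together receiver aperture, optics and detector efficiency (illustrative value)."""
    return k * pulse_energy * reflectivity * math.exp(-2.0 * atten * r) / r**2

def max_detection_range(p_min=0.95, r_step=1.0, r_limit=20000.0, **kwargs):
    """Largest range (m) at which the Poisson detection probability 1 - exp(-N) still meets p_min."""
    r, r_max = r_step, 0.0
    while r <= r_limit:
        if 1.0 - math.exp(-mean_photoelectrons(r, **kwargs)) >= p_min:
            r_max = r
        r += r_step
    return r_max

print(max_detection_range())                      # baseline parameter set
print(max_detection_range(pulse_energy=4e-4))     # stronger pulse -> longer maximum range
print(max_detection_range(atten=3e-4))            # heavier attenuation -> shorter maximum range
```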
Historical effects of temperature and precipitation on California crop yields
Lobell, D.B. [Energy and Environment Directorate, Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Cahill, K.N. [Interdisciplinary Graduate Program in Environment and Resources, Stanford University, Stanford, CA 94305 (United States); Field, C.B. [Department of Global Ecology, Carnegie Institution, Stanford, CA 94305 (United States)
2007-03-15
For the 1980-2003 period, we analyzed the relationship between crop yield and three climatic variables (minimum temperature, maximum temperature, and precipitation) for 12 major Californian crops: wine grapes, lettuce, almonds, strawberries, table grapes, hay, oranges, cotton, tomatoes, walnuts, avocados, and pistachios. The months and climatic variables of greatest importance to each crop were used to develop regressions relating yield to climatic conditions. For most crops, fairly simple equations using only 2-3 variables explained more than two-thirds of observed yield variance. The types of variables and months identified suggest that relatively poorly understood processes such as crop infection, pollination, and dormancy may be important mechanisms by which climate influences crop yield. Recent climatic trends have had mixed effects on crop yields, with orange and walnut yields aided, avocado yields hurt, and most crops little affected by recent climatic trends. Yield-climate relationships can provide a foundation for forecasting crop production within a year and for projecting the impact of future climate changes.
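A sketch of the kind of regression described above, fitting yield on a few monthly climate variables by ordinary least squares; the data here are synthetic placeholders, not the California series used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic stand-in for a 24-year record: monthly climate predictors and yield
n_years = 24
tmin = rng.normal(8.0, 1.0, n_years)      # minimum temperature in a key month (deg C)
tmax = rng.normal(28.0, 1.5, n_years)     # maximum temperature in a key month (deg C)
prec = rng.normal(60.0, 15.0, n_years)    # precipitation in a key month (mm)
yield_t = 10.0 + 0.4 * tmin - 0.3 * tmax + 0.02 * prec + rng.normal(0.0, 0.5, n_years)

# ordinary least squares: yield ~ intercept + tmin + tmax + precipitation
X = np.column_stack([np.ones(n_years), tmin, tmax, prec])
coef, *_ = np.linalg.lstsq(X, yield_t, rcond=None)

fitted = X @ coef
r2 = 1.0 - np.sum((yield_t - fitted) ** 2) / np.sum((yield_t - yield_t.mean()) ** 2)
print("coefficients:", np.round(coef, 3))
print("explained variance R^2:", round(float(r2), 3))
```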
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
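One of the processes named above, the Ornstein-Uhlenbeck position process (range-resident movement about a home-range centre), can be simulated with a simple Euler-Maruyama scheme; the parameter values below are arbitrary.

```python
import math
import random

def simulate_ou_track(n_steps=1000, dt=0.1, tau=5.0, sigma=1.0, seed=0):
    """Euler-Maruyama simulation of a 2-D Ornstein-Uhlenbeck position process
    mean-reverting toward the origin with timescale tau and noise intensity sigma."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    track = [(x, y)]
    for _ in range(n_steps):
        x += -x / tau * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        y += -y / tau * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        track.append((x, y))
    return track

track = simulate_ou_track()
print(track[:3], "...", track[-1])
```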
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
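A much-simplified stand-in for the idea (not the column-generation algorithm of the paper): take the hash functions to be the top principal directions, which maximize the variance of the projections, and binarize at the per-bit median. Data, dimensions, and the function name are hypothetical.

    import numpy as np

    def max_variance_hash(X, n_bits=8):
        """Toy variance-maximizing hashing: project onto top principal directions
        and binarize at the per-bit median (a PCA-hashing style simplification)."""
        Xc = X - X.mean(axis=0)
        cov = np.cov(Xc, rowvar=False)                 # covariance of centered features
        eigvals, eigvecs = np.linalg.eigh(cov)
        W = eigvecs[:, np.argsort(eigvals)[::-1][:n_bits]]   # top-variance directions
        proj = Xc @ W
        return (proj > np.median(proj, axis=0)).astype(np.uint8)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 64))                    # hypothetical feature vectors
    codes = max_variance_hash(X, n_bits=16)
    print(codes.shape, codes[:2])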
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze two natural algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
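A minimal sketch of the two packing rules named above, shown here on the ordinary capacity constraint; it only illustrates how First-Fit behaves under increasing versus decreasing item order, not the maximum-resource analysis of the paper. Item sizes are hypothetical.

    def first_fit(items, capacity=1.0):
        """Place each item into the first open bin it fits; open a new bin otherwise."""
        bins = []
        for size in items:
            for b in bins:
                if sum(b) + size <= capacity:
                    b.append(size)
                    break
            else:
                bins.append([size])
        return bins

    items = [0.55, 0.7, 0.2, 0.45, 0.1, 0.35, 0.6]      # hypothetical item sizes
    ffi = first_fit(sorted(items))                      # First-Fit-Increasing
    ffd = first_fit(sorted(items, reverse=True))        # First-Fit-Decreasing
    print("FFI bins:", len(ffi), ffi)
    print("FFD bins:", len(ffd), ffd)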
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
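In outline, the simpler route described above is a textbook constrained maximization of the Shannon entropy; the following is a generic sketch of that standard calculation, not the paper's own notation:

$$ \max_{\{p_k\}} \; S=-\sum_{k} p_k \ln p_k \quad \text{subject to} \quad \sum_k p_k = 1, \qquad \sum_k p_k \ln k = \chi . $$

Stationarity of $S-\alpha\left(\sum_k p_k-1\right)-\lambda\left(\sum_k p_k\ln k-\chi\right)$ with respect to each $p_k$ gives $-\ln p_k - 1 - \alpha - \lambda \ln k = 0$, i.e.

$$ p_k \;\propto\; e^{-\lambda \ln k} \;=\; k^{-\lambda}, $$

a pure power law whose exponent $\lambda$ is fixed by the constraint value $\chi$; Zipf's law corresponds to $\lambda \approx 1$.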
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.
Neutron and fission yields from high-energy deuterons in infinite /sup 238/U targets
Canfield, E.
1965-06-28
Early work on the interaction of high energy deuterons with large /sup 238/U targets is reexamined and current theoretical study is discussed. Results of fission and neutron yield calculations are compared with experiment. (SDF)
Approximating the maximum weight clique using replicator dynamics.
Bomze, I R; Pelillo, M; Stix, V
2000-01-01
Given an undirected graph with weights on the vertices, the maximum weight clique problem (MWCP) is to find a subset of mutually adjacent vertices (i.e., a clique) having the largest total weight. This is a generalization of the classical problem of finding the maximum cardinality clique of an unweighted graph, which arises as a special case of the MWCP when all the weights associated to the vertices are equal. The problem is known to be NP-hard for arbitrary graphs and, according to recent theoretical results, so is the problem of approximating it within a constant factor. Although there has recently been much interest around neural-network algorithms for the unweighted maximum clique problem, no effort has been directed so far toward its weighted counterpart. In this paper, we present a parallel, distributed heuristic for approximating the MWCP based on dynamics principles developed and studied in various branches of mathematical biology. The proposed framework centers around a recently introduced continuous characterization of the MWCP which generalizes an earlier remarkable result by Motzkin and Straus. This allows us to formulate the MWCP (a purely combinatorial problem) in terms of a continuous quadratic programming problem. One drawback associated with this formulation, however, is the presence of "spurious" solutions, and we present characterizations of these solutions. To avoid them we introduce a new regularized continuous formulation of the MWCP inspired by previous works on the unweighted problem, and show how this approach completely solves the problem. The continuous formulation of the MWCP naturally maps onto a parallel, distributed computational network whose dynamical behavior is governed by the so-called replicator equations. These are dynamical systems introduced in evolutionary game theory and population genetics to model evolutionary processes on a macroscopic scale. We present theoretical results which guarantee that the solutions provided by
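A toy sketch of the replicator iteration applied to the regularized Motzkin-Straus quadratic program for the unweighted maximum clique problem; this is a simplified stand-in for the weighted, regularized formulation developed in the paper, and the graph, regularization value, and threshold are hypothetical.

    import numpy as np

    def replicator_max_clique(A, iters=2000, eps=0.5):
        """Discrete replicator dynamics x <- x * (W x) / (x' W x) on the simplex,
        with W = A + eps*I (regularized Motzkin-Straus, unweighted toy version)."""
        n = A.shape[0]
        W = A + eps * np.eye(n)            # regularization discourages spurious solutions
        x = np.full(n, 1.0 / n)            # start at the barycenter of the simplex
        for _ in range(iters):
            Wx = W @ x
            x = x * Wx / (x @ Wx)
        return np.where(x > 1.0 / (2 * n))[0]   # support of the fixed point ~ clique

    # Hypothetical 5-vertex graph: vertices {0,1,2} form a triangle, 3-4 share an edge
    A = np.zeros((5, 5))
    for i, j in [(0, 1), (0, 2), (1, 2), (3, 4)]:
        A[i, j] = A[j, i] = 1.0
    print("clique found:", replicator_max_clique(A))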
Maximum entropy models of ecosystem functioning
Bertram, Jason, E-mail: jason.bertram@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)
2014-12-05
Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
Maximum entropy models of ecosystem functioning
Bertram, Jason
2014-12-01
Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes' broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation-damaged cells is currently based on probabilistic assumptions and is fitted experimentally for each tumor, radiation type, and set of conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle applied to the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of its adjacency matrix. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
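A minimal numerical check of the definition above on a small hypothetical graph (a 4-cycle); it is unrelated to the bicyclic extremal result itself.

    import numpy as np

    def estrada_index(A):
        """EE(G) = sum_i exp(lambda_i) over eigenvalues of the adjacency matrix."""
        eigenvalues = np.linalg.eigvalsh(A)     # A is symmetric for a simple graph
        return float(np.sum(np.exp(eigenvalues)))

    # Hypothetical graph: the 4-cycle C4, eigenvalues 2, 0, 0, -2
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    print("Estrada index of C4:", round(estrada_index(A), 4))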
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Noise and physical limits to maximum resolution of PET images
Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU ' Gregorio Maranon' , E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2007-10-01
In this work we show that there is a limit for the maximum resolution achievable with a high-resolution PET scanner, as well as for the best signal-to-noise ratio; both are ultimately related to the physical effects involved in the emission and detection of the radiation and thus cannot be overcome with any particular reconstruction method. These effects prevent the high spatial-frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is identified as a limiting factor for obtaining high-resolution images in tomographs with small crystal sizes. These results have implications for deciding the optimal number of voxels of the reconstructed image and for designing better PET scanners.
Maximum power analysis of photovoltaic module in Ramadi city
Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)
2013-07-01
Performance of a photovoltaic (PV) module is strongly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power obtainable in Ramadi city (100 km west of Baghdad) is analyzed in practice. The analysis is based on real irradiance values obtained for the first time using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The data were measured at the earth's surface in the campus area of Anbar University. Actual average readings were taken from the data logger of the sun tracker system, which was set to store the average reading every two minutes, based on one-second samples. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the system sizing of PV can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.
Maximum power analysis of photovoltaic module in Ramadi city
Majid Shahatha Salim, Jassim Mohammed Najim, Salih Mohammed Salih
2013-01-01
Performance of a photovoltaic (PV) module is strongly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power obtainable in Ramadi city (100 km west of Baghdad) is analyzed in practice. The analysis is based on real irradiance values obtained for the first time using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The data were measured at the earth's surface in the campus area of Anbar University. Actual average readings were taken from the data logger of the sun tracker system, which was set to store the average reading every two minutes, based on one-second samples. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the system sizing of PV can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M such that each of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-01-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes. PMID:22308461
The maximum rate of mammal evolution.
Evans, Alistair R; Jones, David; Boyer, Alison G; Brown, James H; Costa, Daniel P; Ernest, S K Morgan; Fitzgerald, Erich M G; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Smith, Felisa A; Stephens, Patrick R; Theodor, Jessica M; Uhen, Mark D
2012-03-13
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Multiaxial yield behaviour of polypropylene
Lang R.
2010-06-01
In order to characterize the yield behavior of polypropylene as a function of pressure and to verify the applicability of the Drucker-Prager yield function, various tests were conducted to cover a wide range of stress states, from uniaxial tension and compression to multiaxial tension and confined compression. Tests were performed below and above the glass transition temperature to study the combined effect of pressure and temperature. The pressure sensitivity coefficient, as an intrinsic material parameter, was determined as a function of temperature. Increasing pressure sensitivity values were found with increasing temperature, which can be related to the change in the free volume and thus to the enhanced molecular mobility. A best-fit Drucker-Prager yield function was applied to the experimental yield stresses, and an average error of 7% between the predictions and the measurements was obtained.
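For reference, a common way of writing the Drucker-Prager yield criterion mentioned above (standard textbook form; the paper's fitted coefficients are not reproduced here):

$$ f(I_1, J_2) \;=\; \sqrt{J_2} \;+\; \alpha\, I_1 \;-\; k \;=\; 0, $$

where $I_1$ is the first invariant of the stress tensor (a measure of the hydrostatic pressure), $J_2$ the second invariant of the deviatoric stress, $\alpha$ the pressure sensitivity coefficient, and $k$ a material constant related to the yield stress in pure shear.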
Effect of biofertilizers on yield and yield components of cucumber
Faranak Moshabaki Isfahani
2012-01-01
Biofertilizer is defined as a substance containing living organisms which, when applied to seed, plant surfaces, or soil, colonize the rhizosphere or the interior of the plant and promote growth by increasing the supply or availability of primary nutrients to the host plant. Biofertilizers are well recognized as an important component of integrated plant nutrient management for sustainable agriculture and hold great promise for improving crop yield. The present study evaluated the effects of plant growth promoting rhizobacteria produced by Pseudomonas sp., phosphate biofertilizers produced by Pseudomonas putida strain P13 and Pantoea agglomerans strain P5, and chemical fertilizers, applied in separate treatments, on the yield and yield components of cucumber, using a factorial field experiment in a completely randomized block design with three replications. The symbol P represents chemical fertilizer at rates of 0, 25%, 50%, 75%, and 100%; B1 denotes the plant growth promoting rhizobacteria (PGPR), and B2 denotes biofertilizer-2. The results showed that P1B0 had the highest yield and the control treatment the lowest. P100B1 had the greatest plant length and P100B0 the least; P25B1 had the highest chlorophyll content and P75B2 the lowest. P75B2 had the highest shoot dry weight and P100B0 the lowest. B1P50 had the highest shoot fresh weight and P25B2 the lowest. B1P50 had the highest root dry weight and P100B0 the lowest. B1P50 had the highest root fresh weight and P25B2 the lowest. Overall, the results indicate that the use of biological fertilizers increased the yield and yield components of cucumber.
Approximate Maximum Likelihood Commercial Bank Loan Management Model
Godwin N.O. Asemota
2009-01-01
Problem statement: Loan management is a very complex and yet vitally important aspect of any commercial bank's operations. The balance sheet position shows the main sources of funds as deposits and shareholders' contributions. Approach: In order to operate profitably, remain solvent, and consequently grow, a commercial bank needs to properly manage its excess cash to yield returns in the form of loans. Results: The above are achieved if the bank can honor depositors' withdrawals at all times and also grant loans to credible borrowers. This is so because loans are the main portfolio of a commercial bank that yields the highest rate of return. Commercial banks and the environment in which they operate are dynamic, so any attempt to model their behavior without including some element of uncertainty would be less than desirable. The inclusion of an uncertainty factor is now possible with the advent of stochastic optimal control theory. Thus, an approximate maximum likelihood algorithm with a variable forgetting factor was used to model the loan management behavior of a commercial bank in this study. Conclusion: The results showed that the uncertainty factor employed in the stochastic modeling enables adaptive control of loan demand as well as of fluctuating cash balances in the bank. This loan model can also visually aid commercial bank managers' planning decisions by allowing them to competently determine excess cash and invest this excess cash as loans to earn more assets without jeopardizing public confidence.
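The abstract does not spell out the estimator, so as a generic illustration of adaptive estimation with a forgetting factor (not the bank's model itself), here is recursive least squares with exponential forgetting on synthetic data; all names and values are hypothetical.

    import numpy as np

    def rls_forgetting(X, y, lam=0.98, delta=100.0):
        """Recursive least squares with forgetting factor lam:
        older observations are down-weighted by lam at each step."""
        n_features = X.shape[1]
        theta = np.zeros(n_features)          # parameter estimate
        P = delta * np.eye(n_features)        # inverse-covariance-like matrix
        for x_t, y_t in zip(X, y):
            k = P @ x_t / (lam + x_t @ P @ x_t)          # gain vector
            theta = theta + k * (y_t - x_t @ theta)      # update estimate
            P = (P - np.outer(k, x_t @ P)) / lam         # update P
        return theta

    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, 3))
    true_theta = np.array([1.5, -0.7, 0.3])
    y = X @ true_theta + 0.1 * rng.normal(size=500)
    print("estimated parameters:", rls_forgetting(X, y).round(2))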
Fission yield measurements at IGISOL
Lantz M.
2016-01-01
The fission product yields are an important characteristic of the fission process. In fundamental physics, knowledge of the yield distributions is needed to better understand the fission process. For nuclear energy applications, good knowledge of neutron-induced fission-product yields is important for the safe and efficient operation of nuclear power plants. With the Ion Guide Isotope Separator On-Line (IGISOL) technique, products of nuclear reactions are stopped in a buffer gas and then extracted and separated by mass. Thanks to the high resolving power of the JYFLTRAP Penning trap at the University of Jyväskylä, fission products can be isobarically separated, making it possible to measure relative independent fission yields. In some cases it is even possible to resolve isomeric states from the ground state, permitting measurements of isomeric yield ratios. So far the reactions U(p,f) and Th(p,f) have been studied using the IGISOL-JYFLTRAP facility. Recently, a neutron converter target has been developed utilizing the Be(p,xn) reaction. We here present the IGISOL technique for fission yield measurements and some of the results from the measurements on proton-induced fission. We also present the development of the neutron converter target, the characterization of the neutron field, and the first tests with neutron-induced fission.
Theoretical Particle Astrophysics
Kamionkowski, Marc
2013-08-07
The research carried out under this grant encompassed work on the early Universe, dark matter, and dark energy. We developed CMB probes for primordial baryon inhomogeneities, primordial non-Gaussianity, cosmic birefringence, gravitational lensing by density perturbations and gravitational waves, and departures from statistical isotropy. We studied the detectability of wiggles in the inflation potential in string-inspired inflation models. We studied novel dark-matter candidates and their phenomenology. This work helped advance the DoE's Cosmic Frontier (and also Energy and Intensity Frontiers) by finding synergies between a variety of different experimental efforts, by developing new searches, science targets, and analyses for existing/forthcoming experiments, and by generating ideas for new next-generation experiments.
Theoretical physics 5 thermodynamics
Nolting, Wolfgang
2017-01-01
This concise textbook offers a clear and comprehensive introduction to thermodynamics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series, defining macroscopic variables, such as internal energy, entropy and pressure, together with thermodynamic principles. The first part of the book introduces the laws of thermodynamics and thermodynamic potentials. More complex themes are covered in the second part of the book, which describes phases and phase transitions in depth. Ideally suited to undergraduate students with some grounding in classical mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful German editions, the eight volumes of this series cove...
Theoretical Molecular Biophysics
Scherer, Philipp
2010-01-01
"Theoretical Molecular Biophysics" is an advanced study book for students, shortly before or after completing undergraduate studies, in physics, chemistry or biology. It provides the tools for an understanding of elementary processes in biology, such as photosynthesis on a molecular level. A basic knowledge in mechanics, electrostatics, quantum theory and statistical physics is desirable. The reader will be exposed to basic concepts in modern biophysics such as entropic forces, phase separation, potentials of mean force, proton and electron transfer, heterogeneous reactions coherent and incoherent energy transfer as well as molecular motors. Basic concepts such as phase transitions of biopolymers, electrostatics, protonation equilibria, ion transport, radiationless transitions as well as energy- and electron transfer are discussed within the frame of simple models.
Social Security: Theoretical Aspects
O. I. Kashnik
2013-01-01
The paper looks at the phenomena of security and social security from philosophical, sociological, and psychological perspectives. The analysis of domestic and foreign scientific materials demonstrates the need for interdisciplinary studies, including pedagogy and education, aimed at developing guidelines for protecting the social system from destruction. The paper defines the indicators, security level indices, and their assessment methods, singled out from the analytical reports and security studies of the leading Russian sociological centers and international expert organizations, including the United Nations. The research is aimed at finding adequate models of personal and social security control systems at various social levels. The theoretical concepts can be applied by teachers of the Bases of Life Safety course, by managers and researchers developing assessment criteria and security indices for evaluating the educational environment, and in the diagnostics and expertise of educational establishments from the security standpoint.
Theoretical physics 3 electrodynamics
Nolting, Wolfgang
2016-01-01
This textbook offers a clear and comprehensive introduction to electrodynamics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series. The first part of the book describes the interaction of electric charges and magnetic moments by introducing electro- and magnetostatics. The second part of the book establishes deeper understanding of electrodynamics with the Maxwell equations, quasistationary fields and electromagnetic fields. All sections are accompanied by a detailed introduction to the math needed. Ideally suited to undergraduate students with some grounding in classical and analytical mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful Germa...
Potential Energy Surfaces and Quantum Yields for Photochromic Diarylethene Reactions
Makoto Hatakeyama
2013-05-01
Photochromic diarylethenes (DAEs) are among the most promising molecular switching systems for future molecular electronics. Numerous derivatives have been synthesized recently, and experimental quantum yields (QYs) have been reported for two categories of them. Although the QY is one of the most important properties in various applications, it is also the most difficult property to predict before a molecule is actually synthesized. We have previously reported preliminary theoretical studies on what determines the QYs in both categories of DAE derivatives. Here, reflecting theoretical analyses of potential energy surfaces and recent experimental results, a rational explanation of the general guiding principle for QY design is presented for future molecular design.
Recent advance on the efficiency at maximum power of heat engines
Tu Zhan-Chun
2012-01-01
This review reports several key advances in the theoretical investigation of the efficiency at maximum power of heat engines over the past five years. The analytical results for the efficiency at maximum power of the Curzon-Ahlborn heat engine, the stochastic heat engine constructed from a Brownian particle, and Feynman's ratchet as a heat engine are presented. It is found that the efficiency at maximum power exhibits universal behavior at small relative temperature differences; that lower and upper bounds might exist under quite general conditions; and that the problem of efficiency at maximum power comes down to seeking the minimum irreversible entropy production in each finite-time isothermal process for a given time.
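For reference, the Curzon-Ahlborn efficiency at maximum power and its small-$\eta_C$ expansion, whose first two terms are the universal ones referred to above (standard results, stated here for orientation rather than quoted from the review):

$$ \eta_{CA} \;=\; 1-\sqrt{\frac{T_c}{T_h}} \;=\; \frac{\eta_C}{2}+\frac{\eta_C^{2}}{8}+O\!\left(\eta_C^{3}\right), \qquad \eta_C \;=\; 1-\frac{T_c}{T_h}, $$

where $T_h$ and $T_c$ are the hot and cold reservoir temperatures and $\eta_C$ is the Carnot efficiency.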
Asymptotic properties of maximum likelihood estimators in models with multiple change points
He, Heping; 10.3150/09-BEJ232
2011-01-01
Models with multiple change points are used in many fields; however, the theoretical properties of maximum likelihood estimators of such models have received relatively little attention. The goal of this paper is to establish the asymptotic properties of maximum likelihood estimators of the parameters of a multiple change-point model for a general class of models in which the form of the distribution can change from segment to segment and in which, possibly, there are parameters that are common to all segments. Consistency of the maximum likelihood estimators of the change points is established and the rate of convergence is determined; the asymptotic distribution of the maximum likelihood estimators of the parameters of the within-segment distributions is also derived. Since the approach used in single change-point models is not easily extended to multiple change-point models, these results require the introduction of those tools for analyzing the likelihood function in a multiple change-point model.
Probable Maximum Earthquake Magnitudes for the Cascadia Subduction
Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.
2013-12-01
The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternative concept of mp(T), the probable maximum magnitude within a time interval T. The mp(T) can be determined using theoretical magnitude-frequency distributions such as the tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, the β-value (which equals 2/3 of the b-value in the GR distribution) and the corner magnitude (mc), can be obtained by applying the maximum likelihood method to earthquake catalogs with an additional constraint from the tectonic moment rate. Here, we integrate paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, the rupture extent of the events, and the turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
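For context, the tapered Gutenberg-Richter form referred to above is usually written for the seismic moment M as a survival (complementary cumulative) function; this is the standard form, and the paper's specific parameter values are not reproduced here:

$$ \Pr(\text{moment} \ge M) \;=\; \left(\frac{M_t}{M}\right)^{\beta}\exp\!\left(\frac{M_t - M}{M_c}\right), \qquad M \ge M_t, $$

where $M_t$ is the completeness threshold moment, $\beta$ is the index (equal to 2/3 of the Gutenberg-Richter $b$-value), and $M_c$ is the corner moment corresponding to the corner magnitude $m_c$.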
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional space where the data are expected to be more separable, and then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance for non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of a ship's draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among the most commonly used are those of Barrass, Millward, Eryuzlu, and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path, and a digital part designed in VHDL. The analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Relationships between surface solar radiation and wheat yield in Spain
Hernandez-Barrera, Sara; Rodriguez-Puebla, Concepción
2017-04-01
Here we examine the role of solar radiation in describing wheat-yield variability in Spain. We use Partial Least Squares regression to capture the modes of surface solar radiation that drive wheat-yield variability. We show that surface solar radiation introduces the effects of teleconnection patterns on wheat yield and is also associated with drought and diurnal temperature range. We highlight the importance of surface solar radiation for building wheat-yield projection models, because it could reduce uncertainty with respect to projections based on temperature and precipitation variables. In addition, the significance of the model based on surface solar radiation is greater than that of the previous one based on drought and diurnal temperature range (Hernandez-Barrera et al., 2016). According to our results, the increase of solar radiation over Spain during the 21st century could force a wheat-yield decrease (Hernandez-Barrera et al., 2017). Hernandez-Barrera S., Rodríguez-Puebla C. and Challinor A.J. 2016. Effects of diurnal temperature range and drought on wheat yield in Spain. Theoretical and Applied Climatology. DOI: 10.1007/s00704-016-1779-9. Hernandez-Barrera S., Rodríguez-Puebla C. 2017. Wheat yield in Spain and associated solar radiation patterns. International Journal of Climatology. DOI: 10.1002/joc.4975.
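A minimal sketch of the regression step, with scikit-learn's PLSRegression used as a generic stand-in for the partial least squares procedure; the data, array shapes, and variable names are synthetic and hypothetical.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    n_years, n_gridpoints = 30, 200
    # Hypothetical predictor field: surface solar radiation anomalies on a grid
    ssr = rng.normal(size=(n_years, n_gridpoints))
    # Hypothetical wheat-yield anomalies driven by one spatial pattern of the field
    pattern = rng.normal(size=n_gridpoints)
    wheat_yield = ssr @ pattern / n_gridpoints + 0.1 * rng.normal(size=n_years)

    pls = PLSRegression(n_components=2)        # retain the two leading PLS modes
    pls.fit(ssr, wheat_yield)
    y_pred = pls.predict(ssr).ravel()
    r2 = 1.0 - np.sum((wheat_yield - y_pred) ** 2) / np.sum((wheat_yield - wheat_yield.mean()) ** 2)
    print("explained yield variance (R^2):", round(r2, 2))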
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
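A bare-bones sketch of the final step described above, extracting dominant correlation modes by eigendecomposition of a correlation matrix estimated from an ensemble of aligned structures; this is ordinary PCA on synthetic coordinates, not the maximum likelihood superposition and estimation machinery of the paper.

    import numpy as np

    rng = np.random.default_rng(4)
    n_models, n_atoms = 50, 120
    # Hypothetical ensemble of superposed structures, flattened to (x1, y1, z1, x2, ...)
    coords = rng.normal(size=(n_models, 3 * n_atoms))
    coords += np.outer(rng.normal(size=n_models), rng.normal(size=3 * n_atoms))  # one shared mode

    centered = coords - coords.mean(axis=0)
    corr = np.corrcoef(centered, rowvar=False)       # correlation matrix of coordinates

    eigvals, eigvecs = np.linalg.eigh(corr)          # PCA = eigendecomposition of corr
    order = np.argsort(eigvals)[::-1]
    top_mode = eigvecs[:, order[0]]                  # dominant mode of structural correlation
    print("fraction of correlation captured by mode 1:",
          round(eigvals[order[0]] / eigvals.sum(), 2))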
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge the thermodynamic hardness is proportional to T⁻¹(I − A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness defined here and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10⁻³ to 5 × 10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10⁷ π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
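The sketch below conveys the general idea of a per-event maximum likelihood fit of process fractions, not the PEN analysis itself: it uses a single observable, only two processes, Gaussian probability density functions as stand-ins for the Monte Carlo-derived PDFs, and a logistic parametrization of the signal fraction chosen purely for convenience.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Stand-in observable (e.g., positron energy) and per-process PDFs; the real
# analysis uses Monte Carlo-derived PDFs of several observables.
rng = np.random.default_rng(1)
energy = np.concatenate([rng.normal(70, 3, 200),     # signal-like process
                         rng.normal(40, 10, 1800)])  # background-like process

pdfs = [norm(70, 3).pdf, norm(40, 10).pdf]  # hypothetical process PDFs

def neg_log_likelihood(theta):
    # theta parametrizes the signal fraction through a logistic transform
    f_sig = 1.0 / (1.0 + np.exp(-theta[0]))
    like = f_sig * pdfs[0](energy) + (1 - f_sig) * pdfs[1](energy)
    return -np.sum(np.log(like))

res = minimize(neg_log_likelihood, x0=[0.0])
f_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
print("fitted signal fraction:", f_hat)  # should be close to 200/2000 = 0.1
```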
Yield statistics of interpolated superoscillations
Katzav, Eytan; Perlsman, Ehud; Schwartz, Moshe
2017-01-01
Yield-optimized interpolated superoscillations have recently been introduced as a means of possibly making the use of the phenomenon of superoscillation practical. In this paper we study how good a superoscillation that is not optimal is; namely, by how much the yield decreases when the signal departs from the optimal one. We consider two situations: one where the signal strictly obeys the interpolation requirement, and the other where that requirement is relaxed. In the latter case the yield can be increased at the expense of deterioration of signal quality. An important conclusion is that optimizing superoscillations may be challenging in terms of the precision needed; however, storing and using them is not nearly as sensitive. This is of great importance in any physical system where noise and error are inevitable.
Theoretical Approaches to Coping
Sofia Zyga
2013-01-01
Introduction: Dealing with stress requires conscious effort; it cannot be perceived as equal to an individual's spontaneous reactions. The intentional management of stress must not be confused with defense mechanisms. Coping differs from adjustment in that the latter is more general, has a broader meaning and includes diverse ways of facing a difficulty. Aim: An exploration of the definition of the term "coping", the function of the coping process and its differentiation from other similar meanings through a literature review. Methodology: Three theoretical approaches to coping are introduced: the psychoanalytic approach, the approach by characteristics, and the Lazarus and Folkman interactive model. Results: The strategic methods of the coping approaches are described, and the article ends with a review of the approaches, including the functioning of the stress-coping process, the classification of types of coping strategies in stress-inducing situations, and a criticism of the coping approaches. Conclusions: The comparison of coping in different situations is difficult, if not impossible. The coping process is a slow process, so an individual may select one method of coping under one set of circumstances and a different strategy at some other time. Such selection of strategies takes place as the situation changes.
Experimental bremsstrahlung yields for MeV proton bombardment of beryllium and carbon
Cohen, David D. [Institute for Environmental Research, Australian Nuclear Science and Technology Organisation, Private Mail Bag 1, Menai, NSW 2234 (Australia)], E-mail: dcz@ansto.gov.au; Stelcer, Eduard; Siegele, Rainer; Ionescu, Mihail; Prior, Michael [Institute for Environmental Research, Australian Nuclear Science and Technology Organisation, Private Mail Bag 1, Menai, NSW 2234 (Australia)
2008-04-15
Experimental bremsstrahlung yields for 2, 3 and 4 MeV protons on thin beryllium and carbon targets have been measured. The yields have been corrected for detector efficiency, self-absorption in the target and fitted to 9th order polynomials over the X-ray energy range 1-10 keV for easy comparison with theoretical calculations.
Theoretical improvements for luminosity monitoring at low energies
Gluza, Janusz; Gunia, Michal [Uniwersytet Slaski, Katowice (Poland). Inst. of Physics and Chemistry of Metals; Riemann, Tord [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Worek, Malgorzata [Bergische Univ., Wuppertal (Germany). Fachbereich Physik
2012-01-15
A comparison of theoretical results on NNLO leptonic and hadronic corrections to Bhabha scattering with the Monte Carlo generator BabaYaga@NLO used at meson factories is given. Complete NLO virtual corrections to the e⁺e⁻ → μ⁺μ⁻γ process are discussed.
MA Chun-hui; HAN Jian-guo; SUN Jie-feng; ZHANG Quan; LU Guan-jun
2004-01-01
The study was conducted to determine the optimum fertilizer-N application rate for maximum seed yields of Zoysiagrass stands established from seeding and transplanting, respectively, at Jiaozhou, Shandong Province, China, from 2001 to 2003. In the third year after establishment, seed yields and yield components for both stands showed a similar response to fertilizer N. Maximum fertile tiller numbers (3,342 heads m-2 and 2,941 heads m-2 from stands seeded in rows and transplanted, respectively) and the highest seed yields (844.50 kg ha-1 and 874.65 kg ha-1 from stands seeded in rows and transplanted, respectively) were obtained at an N fertilizer rate of 20 kg ha-1 in autumn and 10 kg ha-1 in spring (30 kg ha-1 N in total). The fertile tillers and seed yields decreased with further increases of the N fertilizer rate. Fertilizer-N application could increase the length of spike, spikelets per fertile tiller, seed number per spike, setting percentage and thousand-seed weight. The 1000-seed weight and the length of spike from transplanted plots were higher than those from seeded plots. The optimal harvest time of zoysiagrass at Jiaozhou was on the 36th day after peak anthesis, near June 15th, when the seed moisture content was 26-28%.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Effect of Seed treatment, Panchagavya application, Growth and yield of Maize
Shubha, S.
2014-01-01
An experiment was conducted to study the effect of seed treatment, Panchagavya application and organic farming systems on growth and yield of maize. Grain yield of maize varied significantly due to different organic farming systems, seed treatment and panchagavya spray. Maximum grain yield of 19.3 q per ha was recorded in organic farming system II and minimum maize grain yield was recorded in system I (17.1 q/ha). The grain yield of 19.6 q per ha and 16.90 q per ha was recorded with panchagavya (3...
Maximum entropy, word-frequency, Chinese characters, and multiple meanings.
Yan, Xiaoyong; Minnhagen, Petter
2015-01-01
The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation) prediction. The RGF distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (k_max). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and in Chinese characters has quite differently shaped distributions, both are nevertheless well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF prediction is that taking a part of a long text will change the input parameters (M, N, k_max) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF prediction has no system-specific information beyond the three a priori values (M, N, k_max), any specific language characteristic has to be sought in systematic deviations between the RGF prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon model for texts and the present results is discussed.
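Since the RGF prediction is fixed by the three a priori values (M, N, k_max), a small helper like the following (an illustrative sketch, not the authors' code) suffices to extract them from a text:

```python
from collections import Counter

def rgf_inputs(text):
    """Return the three a priori values (M, N, k_max) used by the
    RGF/maximum-entropy prediction: total words, distinct words, and
    the count of the most common word."""
    words = text.lower().split()
    counts = Counter(words)
    M = sum(counts.values())
    N = len(counts)
    k_max = counts.most_common(1)[0][1]
    return M, N, k_max

print(rgf_inputs("the cat sat on the mat the end"))  # (8, 6, 3)
```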
Needs to Update Probable Maximum Precipitation for Critical Infrastructure
Pathak, C. S.; England, J. F.
2015-12-01
Probable Maximum Precipitation (PMP) is theoretically the greatest depth of precipitation for a given duration that is physically possible over a given size storm area at a particular geographical location at a certain time of the year. It is used to develop inflow flood hydrographs, known as the Probable Maximum Flood (PMF), as the design standard for high-risk flood-hazard structures, such as dams and nuclear power plants. PMP estimation methodology was developed in the 1930s and 40s, when many dams were constructed in the US. The procedures to estimate PMP were later standardized by the World Meteorological Organization (WMO) in 1973 and revised in 1986. In the US, PMP estimates have been published in a series of Hydrometeorological Reports (e.g., HMR55A, HMR57, and HMR58/59) by the National Weather Service since the 1950s. In these reports, storm data up to the 1980s were used to establish the current PMP estimates. Since that time, we have acquired an additional 30 to 40 years of meteorological data, including newly available radar- and satellite-based precipitation data. These data sets are expected to have improved data quality and availability in both time and space. In addition, a significant number of extreme storms have occurred, and some of these events were close to or even exceeded the current PMP estimates. In the last 50 years, climate science has progressed and scientists have a better understanding of the atmospheric physics of extreme storms. However, applied research in the estimation of PMP has been lagging behind. Alternative methods, such as atmospheric numerical modeling, should be investigated for estimating PMP and associated uncertainties. It would be highly desirable if regional atmospheric numerical models could be utilized in the estimation of PMP and its uncertainties, in addition to the methods used to originally develop the PMP index maps in the existing hydrometeorological reports.
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
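A toy version of the MAXENT selection described above, under the single (assumed) constraint of a prescribed average stimulus over discrete stimulus levels: the entropy-maximizing volume fractions are Boltzmann-like, with the Lagrange multiplier fixed numerically. The stimulus values and target mean below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Toy MAXENT problem: discrete "stimulus" levels x_i with a prescribed
# average stimulus m; the maximum-entropy fractions are Boltzmann-like,
# p_i proportional to exp(-lam * x_i), with lam fixed by the constraint.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
m = 1.2  # prescribed average (an assumed value for illustration)

def mean_at(lam):
    w = np.exp(-lam * x)
    return np.dot(x, w) / w.sum()

lam = brentq(lambda lam: mean_at(lam) - m, -10, 10)  # solve the constraint
p = np.exp(-lam * x)
p /= p.sum()
print("maximum-entropy fractions:", np.round(p, 3),
      "mean:", round(float(np.dot(p, x)), 3))
```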
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state, market equilibrium, is solely determined by the initial conditions and the inherent characteristics of the two subsystems; while the different ways of transfer affect the model with respect to the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
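For reference, the elegant linear-time solution mentioned above is, in conventional (non-monadic, non-generic) form, the classic Kadane-style scan; the sketch below uses Python rather than the paper's functional setting.

```python
def max_segment_sum(xs):
    """Linear-time maximum segment sum: the largest sum of a contiguous
    segment of xs, with the empty segment counting as 0."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)  # best segment ending at x
        best = max(best, ending_here)          # best segment seen so far
    return best

print(max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # 187
```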
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ_0, μ_1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ_0, ℓ_1)} with μ_0 … g_*(S_t), where s ↦ g_*(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
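A minimal sketch of fitting a pairwise maximum entropy (Ising-type) model to binary activity data for a system as small as the G7, where all 2^7 states can be enumerated exactly; the data here are random stand-ins, and the plain gradient-ascent moment matching is a generic choice, not the authors' procedure.

```python
import itertools
import numpy as np

# Pairwise maximum-entropy (Ising-type) fit for a small binary system such as
# G7 recession/expansion indicators: adjust fields h and couplings J until the
# model moments match the empirical first and second moments.
rng = np.random.default_rng(0)
n, T = 7, 400
data = np.where(rng.random((T, n)) < 0.5, -1, 1)   # stand-in +/-1 activity data

states = np.array(list(itertools.product([-1, 1], repeat=n)))  # all 2^n states
emp_m = data.mean(axis=0)
emp_c = data.T @ data / T

h = np.zeros(n)
J = np.zeros((n, n))
for _ in range(2000):                               # moment matching by gradient ascent
    E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    mod_m = p @ states                              # model <s_i>
    mod_c = states.T @ (states * p[:, None])        # model <s_i s_j>
    h += 0.05 * (emp_m - mod_m)
    J += 0.05 * (emp_c - mod_c)
    np.fill_diagonal(J, 0.0)

print("max mean mismatch after fitting:", float(np.max(np.abs(emp_m - mod_m))))
```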
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p_μ in quantum theory to construct a momentum-space quantum gravity geometry with a metric s_μν and a curvature tensor P^λ_μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches only focus on discriminating moving objects by background subtraction, whether the objects of interest are moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the training models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmentation precision.
Specific yield, High Plains aquifer
U.S. Geological Survey, Department of the Interior — This raster data set represents specific-yield ranges in the High Plains aquifer of the United States. The High Plains aquifer underlies 112.6 million acres (176,000...
Assessing potential sustainable wood yield
Robert F. Powers
2001-01-01
Society is making unprecedented demands on world forests to produce and sustain many values. Chief among them is wood supply, and concerns are rising globally about the ability of forests to meet increasing needs. Assessing this is not easy. It requires a basic understanding of the principles governing forest productivity: how wood yield varies with tree and stand...
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in a study to investigate the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50 mm, 65 mm and 80 mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Maximum permissible concentrations of uranium in air
Adams, N
1973-01-01
The retention of uranium by bone and kidney has been re-evaluated taking account of recently published data for a man who had been occupationally exposed to natural uranium aerosols and for adults who had ingested uranium at normal dietary levels. For life-time occupational exposure to uranium aerosols the new retention functions yield a greater retention in bone and a smaller retention in kidney than the earlier ones, which were based on acute intakes of uranium by terminal patients. Hence bone replaces kidney as the critical organ. The (MPC)_a for uranium-238 based on radiological considerations, using the current (1959) ICRP lung model with the new retention functions, is slightly smaller than for the earlier functions, but the (MPC)_a determined by chemical toxicity remains the most restrictive.
Genetic Parameters for Milk, Fat Yield and Age at First Calving of Chinese Holsteins in Heilongjiang
Anonymous
2001-01-01
Genetic parameters for milk yield, fat yield and age at first calving of Chinese Holsteins in Heilongjiang were evaluated using multiple-trait restricted maximum likelihood procedures with an animal model. Data consisted of records of 2,496 first-lactation Chinese Holstein cows collected from 1989 to 2000. The model included 21 herd effects, four calving season effects, nine age-at-first-calving effects, and 6,697 animal effects. (Co)variance components of milk yield, fat yield and age at first calving were estimated with the software package for variance component estimation (VCE) using an animal model. The heritabilities were 0.14, 0.21 and 0.38 for milk yield, fat yield and age at first calving, respectively. The estimates of genetic correlation between milk yield and fat yield and between milk yield and age at first calving were 0.96 and -0.29, respectively. The estimate of genetic correlation between fat yield and age at first calving was -0.28.
Distillation time effect on lavender essential oil yield and composition.
Zheljazkov, Valtcho D; Cantrell, Charles L; Astatkie, Tess; Jeliazkova, Ekaterina
2013-01-01
Lavender (Lavandula angustifolia Mill.) is one of the most widely grown essential oil crops in the world. Commercial extraction of lavender oil is done using steam distillation. The objective of this study was to evaluate the effect of the length of the distillation time (DT) on lavender essential oil yield and composition when extracted from dried flowers. The following distillation times (DT) were tested in this experiment: 1.5 min, 3 min, 3.75 min, 7.5 min, 15 min, 30 min, 60 min, 90 min, 120 min, 150 min, 180 min, and 240 min. The essential oil yield (range 0.5-6.8%) reached a maximum at 60 min DT. The concentrations of cineole (range 6.4-35%) and fenchol (range 1.7-2.9%) were highest at the 1.5 min DT and decreased with increasing length of the DT. The concentration of camphor (range 6.6-9.2%) reached a maximum at 7.5-15 min DT, while the concentration of linalool acetate (range 15-38%) reached a maximum at 30 min DT. Results suggest that lavender essential oil yield may not increase after 60 min DT. The changes in essential oil yield and in the concentrations of cineole, fenchol and linalool acetate with DT were modeled very well by an asymptotic nonlinear regression model. DT may be used to modify the chemical profile of lavender oil and to obtain oils with differential chemical profiles from the same lavender flowers. DT must be taken into consideration when citing or comparing reports on lavender essential oil yield and composition.
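The asymptotic nonlinear regression referred to above is commonly written y = a - b·exp(-c·DT); the sketch below fits that form with scipy to illustrative (not measured) yield-versus-DT values consistent with the reported 0.5-6.8% range.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical yield-vs-distillation-time data for illustration only;
# the model form is the usual asymptotic regression y = a - b * exp(-c * t).
t = np.array([1.5, 3, 3.75, 7.5, 15, 30, 60, 90, 120, 150, 180, 240])
y = np.array([0.5, 1.0, 1.2, 2.2, 3.6, 5.0, 6.2, 6.5, 6.7, 6.8, 6.8, 6.8])

def asymptotic(t, a, b, c):
    return a - b * np.exp(-c * t)

params, _ = curve_fit(asymptotic, t, y, p0=[7.0, 7.0, 0.05])
a, b, c = params
print(f"asymptote a = {a:.2f}%, rate c = {c:.3f} per min")
```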
A maximum power point tracking algorithm for photovoltaic applications
Nelatury, Sudarshan R.; Gray, Robert
2013-05-01
The voltage and current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has been a challenge for a long time. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP. But hitherto, an exact solution in closed form for the MPP has not been published. This problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations. Solving them directly is quite difficult. However, we can employ a recursive algorithm to yield a reasonably good solution. In graphical terms, suppose the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane; the point of tangency between the device characteristic and the constant-power contours is the sought-for MPP. It is subject to change with the incident irradiation and temperature, and hence the algorithm that attempts to maintain the MPP should be adaptive in nature and is supposed to have fast convergence and the least misadjustment. There are two parts in its implementation. First, one needs to estimate the MPP. The second task is to have a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits has made it possible to design efficient converters. In this paper, although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance we demonstrate MPP tracking for a commercially available solar panel, the MSX-60. The power electronics circuit is simulated with PSIM software.
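As a simple stand-in for the MPP estimation step (not the paper's Lagrange-based recursion or the PSIM converter simulation), the following sketch scans the P-V curve of a simplified single-diode PV model; all parameter values are illustrative rather than actual MSX-60 data.

```python
import numpy as np

# Simplified single-diode PV model I(V) = I_ph - I_0*(exp(V/(N*n*V_t)) - 1);
# series/shunt resistances are ignored, and parameter values are illustrative.
I_ph, I_0, n, V_t, N_cells = 3.8, 1e-7, 1.3, 0.0257, 36

V = np.linspace(0.0, 22.0, 2000)
I = np.clip(I_ph - I_0 * (np.exp(V / (N_cells * n * V_t)) - 1.0), 0.0, None)
P = V * I

# Locate the maximum power point by a brute-force scan of the P-V curve.
k = np.argmax(P)
print(f"MPP approx: V = {V[k]:.2f} V, I = {I[k]:.2f} A, P = {P[k]:.1f} W")
```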
The Multivariate Watson Distribution: Maximum-Likelihood Estimation and other Aspects
Sra, Suvrit
2011-01-01
This paper studies fundamental aspects of modelling data using multivariate Watson distributions. Although these distributions are natural for modelling axially symmetric data (i.e., unit vectors where ±x are equivalent), using them in high dimensions can be difficult. Why so? Largely because for Watson distributions even basic tasks such as maximum-likelihood estimation are numerically challenging. To tackle the numerical difficulties some approximations have been derived, but these are either grossly inaccurate in high dimensions (Directional Statistics, Mardia & Jupp, 2000) or, when reasonably accurate (J. Machine Learning Research W&CP, v2, Bijral et al., 2007, pp. 35-42), they lack theoretical justification. We derive new approximations to the maximum-likelihood estimates; our approximations are theoretically well-defined, numerically accurate, and easy to compute. We build on our parameter estimation and discuss mixture-modelling with Watson distributions; here we uncover...
Cohen, Andrew [Boston Univ., MA (United States); Schmaltz, Martin [Boston Univ., MA (United States); Katz, Emmanuel [Boston Univ., MA (United States); Rebbi, Claudio [Boston Univ., MA (United States); Glashow, Sheldon [Boston Univ., MA (United States); Brower, Richard [Boston Univ., MA (United States); Pi, So-Young [Boston Univ., MA (United States)
2016-09-30
This award supported a broadly based research effort in theoretical particle physics, including research aimed at uncovering the laws of nature at short (subatomic) and long (cosmological) distances. These theoretical developments apply to experiments in laboratories such as CERN, the facility that operates the Large Hadron Collider outside Geneva, as well as to cosmological investigations done using telescopes and satellites. The results reported here apply to physics beyond the so-called Standard Model of particle physics; physics of high energy collisions such as those observed at the Large Hadron Collider; theoretical and mathematical tools and frameworks for describing the laws of nature at short distances; cosmology and astrophysics; and analytic and computational methods to solve theories of short distance physics. Some specific research accomplishments include: theories of the electroweak interactions, the forces that give rise to many forms of radioactive decay; physics of the recently discovered Higgs boson; models and phenomenology of dark matter, the mysterious component of the universe that has so far been detected only by its gravitational effects; high energy particles in astrophysics and cosmology; algorithmic research and computational methods for physics of and beyond the Standard Model; theory and applications of relativity and its possible limitations; topological effects in field theory and cosmology; and conformally invariant systems and AdS/CFT. This award also supported significant training of students and postdoctoral fellows to lead the research effort in particle theory for the coming decades. These students and fellows worked closely with other members of the group as well as theoretical and experimental colleagues throughout the physics community. Many of the research projects funded by this grant arose in response to recently obtained experimental results in the areas of particle physics and cosmology. We describe a few of
Xuan Guo
2016-01-01
The theoretical formula for the maximum internal forces of a circular tunnel lining structure under underground impact loads is derived in this paper. The internal force calculation formulas under different equivalent forms of impact pseudostatic loads are obtained. Furthermore, by comparing the theoretical solution with the measured data of a top-blasting model test of a circular tunnel, it is found that the proposed theoretical results accord well with the experimental values. The corresponding equivalent impact pseudostatic triangular load is the most realistic pattern of all the tested equivalent forms. The equivalent impact pseudostatic load model and the maximum internal force solution for the tunnel lining structure are thus partially verified.
The Basic Theoretical Framework
Loeb, Abraham
Cosmology is by now a mature experimental science. We are privileged to live at a time when the story of genesis (how the Universe started and developed) can be critically explored by direct observations. Looking deep into the Universe through powerful telescopes, we can see images of the Universe when it was younger because of the finite time it takes light to travel to us from distant sources. Existing data sets include an image of the Universe when it was 0.4 million years old (in the form of the cosmic microwave background), as well as images of individual galaxies when the Universe was older than a billion years. But there is a serious challenge: in between these two epochs was a period when the Universe was dark, stars had not yet formed, and the cosmic microwave background no longer traced the distribution of matter. And this is precisely the most interesting period, when the primordial soup evolved into the rich zoo of objects we now see. The observers are moving ahead along several fronts. The first involves the construction of large infrared telescopes on the ground and in space, that will provide us with new photos of the first galaxies. Current plans include ground-based telescopes which are 24-42 m in diameter, and NASA's successor to the Hubble Space Telescope, called the James Webb Space Telescope. In addition, several observational groups around the globe are constructing radio arrays that will be capable of mapping the three-dimensional distribution of cosmic hydrogen in the infant Universe. These arrays are aiming to detect the long-wavelength (redshifted 21-cm) radio emission from hydrogen atoms. The images from these antenna arrays will reveal how the non-uniform distribution of neutral hydrogen evolved with cosmic time and eventually was extinguished by the ultra-violet radiation from the first galaxies. Theoretical research has focused in recent years on predicting the expected signals for the above instruments and motivating these ambitious
Genetic progress in Dutch crop yields
Rijk, H.C.A.; Ittersum, van M.K.; Withagen, J.C.M.
2013-01-01
Crop yields are a result of interactions between genetics, environment and management (G × E × M). As in the Netherlands differences between potential yield and actual farm yields (yield gaps) are relatively small, progress in genetic potential is essential to further increase farm yields. In this p
Theoretically Optimal Distributed Anomaly Detection
National Aeronautics and Space Administration — A novel general framework for distributed anomaly detection with theoretical performance guarantees is proposed. Our algorithmic approach combines existing anomaly...
Experimental and theoretical evaluation of accelerator based epithermal neutron yields for BNCT
Wielopolski, L.; Ludewig, H.; Powell, J. R.; Raparia, D.; Alessi, J. G.; Alburger, D. E.; Zucker, M. S.; Lowenstein, D. I.
1999-06-01
At BNL, we have evaluated the beam current required to produce a clinical neutron beam for Boron Neutron Capture Therapy (BNCT) with an epithermal neutron flux of 10¹² n/cm²/hr. Experiments were carried out on a Van de Graaff accelerator at the Radiological Research Accelerator Facility (RARAF) at Columbia University. A thick Li target was irradiated by protons with energies from 1.8 to 2.5 MeV. The neutron spectra resulting from the ⁷Li(p,n)⁷Be reaction, followed by various filter configurations, were determined by measuring pulse height distributions with a gas-filled proton recoil spectrometer. These distributions were unfolded into neutron energy spectra using the PSNS code, from which the required beam currents were estimated to be about 5 mA. Results are in good agreement with calculations using the MCNP-4A transport code. In addition, a comparison was also made between the neutron flux obtained at the Brookhaven Medical Research Reactor (where clinical trials of BNCT are ongoing) and measurements at RARAF, using a ¹⁰BF₃ detector in a phantom. These results also support the requirement for about 5 mA beam current.
PHOTOSYNTHESIS AND YIELDS OF GRASSES GROWN IN SALINE CONDITION
E.D. Purbajanti
2014-10-01
The aim of this study was to determine the effects of saline conditions on crop physiology, growth and forage yield. A factorial completely randomized design was used in this study. The first factor was the type of grass: king grass (Pennisetum hybrid), napier grass (Pennisetum purpureum), panicum grass (Panicum maximum), setaria grass (Setaria sphacelata) and star grass (Cynodon plectostachyus). The second factor was salt solution (NaCl) with concentrations of 0, 100, 200 and 300 mM. Parameters of this experiment were the percentage of chlorophyll, rate of photosynthesis, number of tillers, biomass and dry matter yield. Data were analyzed by analysis of variance followed by Duncan's multiple range test when there were significant effects of the treatment. Panicum grass had the highest chlorophyll content (1.85 mg/g of leaf). The photosynthesis rate of setaria grass was the lowest. Increasing the NaCl concentration up to 300 mM reduced chlorophyll content, rate of photosynthesis, tiller number, biomass yield and dry matter yield. Responses of leaf area, biomass and dry matter yield to salinity were linear for king, napier, panicum and setaria grasses. In star grass, the responses of leaf area and biomass were linear, but that of dry matter yield was quadratic. The response of tiller number to salinity was linear for all species.
Application of Artificial Neural Networks in Canola Crop Yield Prediction
S. J. Sajadi
2014-02-01
Crop yield prediction has an important role in agricultural policies such as specification of the crop price. Crop yield prediction research has been based on regression analysis. In this research, canola yield was predicted using Artificial Neural Networks (ANN) using 11 crop years of climate data (1998-2009) in the Gonbad-e-Kavoos region of Golestan province. ANN inputs were mean weekly rainfall, mean weekly temperature, mean weekly relative humidity and mean weekly sunshine hours, and the ANN output was canola yield (kg/ha). Multi-Layer Perceptron networks (MLP) with the Levenberg-Marquardt backpropagation learning algorithm were used for crop yield prediction, and Root Mean Square Error (RMSE) and the square of the correlation coefficient (R2) criteria were used to evaluate the performance of the ANN. The obtained results show that the 13-20-1 network has the lowest RMSE, equal to 101.235, and the maximum value of R2, equal to 0.997, and is suitable for predicting canola yield from climate factors.
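A rough Python counterpart of the 13-20-1 architecture (13 climate inputs, one hidden layer of 20 units, one output) using scikit-learn on synthetic stand-in data; note that scikit-learn does not provide the Levenberg-Marquardt training algorithm used in the paper, so its default solver is used here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic stand-in for weekly climate features (rainfall, temperature,
# relative humidity, sunshine hours) and canola yield in kg/ha.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))
y = 1500 + 300 * X[:, 0] - 150 * X[:, 1] + rng.normal(0, 100, size=200)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                                   random_state=0))
model.fit(X[:150], y[:150])
pred = model.predict(X[150:])
print("RMSE:", mean_squared_error(y[150:], pred) ** 0.5,
      "R2:", r2_score(y[150:], pred))
```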
Physics-based estimates of maximum magnitude of induced earthquakes
Ampuero, Jean-Paul; Galis, Martin; Mai, P. Martin
2016-04-01
In this study, we present new findings from integrating earthquake physics and rupture dynamics into estimates of the maximum magnitude of induced seismicity (Mmax). Existing empirical relations for Mmax lack a physics-based relation between earthquake size and the characteristics of the triggering stress perturbation. To fill this gap, we extend our recent work on the nucleation and arrest of dynamic ruptures derived from fracture mechanics theory. There, we derived theoretical relations between the area and overstress of an overstressed asperity and the ability of ruptures to either stop spontaneously (sub-critical ruptures) or run away (super-critical ruptures). These relations were verified by comparison with simulation and laboratory results, namely 3D dynamic rupture simulations on faults governed by slip-weakening friction, and laboratory experiments of frictional sliding nucleated by localized stresses. Here, we apply and extend these results to situations that are representative of the induced seismicity environment. We present physics-based predictions of Mmax on a fault intersecting a cylindrical reservoir. We investigate the dependence of Mmax on pore-pressure variations (by varying reservoir parameters), frictional parameters and the stress conditions of the fault. We also derive Mmax as a function of injected volume. Our approach provides results that are consistent with observations but suggests a different scaling with injected volume than that of the empirical relation of McGarr (2014).
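For comparison with the empirical baseline mentioned above, the McGarr (2014) bound relates maximum seismic moment to injected volume as M0_max = G·ΔV; a short sketch (with an assumed shear modulus of 3×10^10 Pa) converts this to moment magnitude. This reproduces the baseline scaling only, not the paper's physics-based predictions.

```python
import math

def mcgarr_mmax(delta_v_m3, shear_modulus_pa=3.0e10):
    """Upper-bound moment magnitude from injected volume, following the
    relation M0_max = G * dV (McGarr, 2014); G is an assumed shear modulus."""
    m0 = shear_modulus_pa * delta_v_m3            # seismic moment in N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)   # Hanks-Kanamori moment magnitude

for dv in (1e4, 1e5, 1e6):                        # injected volume in m^3
    print(f"dV = {dv:.0e} m^3 -> Mmax approx {mcgarr_mmax(dv):.1f}")
```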
Feedback Limits to Maximum Seed Masses of Black Holes
Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea
2017-02-01
The most massive black holes observed in the universe weigh up to ∼10¹⁰ M⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale—the transition radius—we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M• ≳ 10⁴ M⊙) hosted in small isolated halos (Mh ≲ 10⁹ M⊙) accreting with relatively small radiative efficiencies (ɛ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high-z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10⁴–10⁶ M⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.
S. Gh Moosavi
2015-07-01
In order to study the effect of N fertilizer rates on morphological traits, yield and yield components of rice cultivars, a study was carried out at the Rice Research Institute of Rasht, Iran during 2009. It was a two-factor factorial experiment based on a randomized complete block design with three replications. The first factor was nitrogen fertilization at four rates of 0, 30, 60 and 90 kg N ha-1 and the second factor was rice cultivar at three levels: Hashemi, Ali-Kazemi and Khazar. The results of analysis of variance showed that N fertilizer rates did not significantly affect panicle length, grain number per panicle, 1000-grain weight and harvest index but significantly affected plant height, tiller number per m2, panicle number per m2, grain yield and biological yield. Means comparison showed that as the N rate was increased from 0 to 90 kg ha-1, plant height, tiller number per m2, panicle number per m2, grain yield and biological yield increased by 12.7, 27.6, 32.6, 84.5 and 61.6%, respectively. Cultivar significantly affected morphological traits, panicle number per m2, grain number per panicle, 1000-grain weight, grain yield and biological yield. The results indicated that the cultivar Khazar had the highest potential grain yield (3424.5 kg ha-1). In total, application of 90 kg N ha-1 with the cultivar Khazar was best for achieving maximum production under the conditions of the current study.
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Maximum creditable compensation. 211.14... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
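The sketch below is not MALCOM; it is a much simpler first-order Markov stand-in that conveys the same operational idea of scoring how likely a procedure sequence is under a model trained on typical histories, with a low average log-likelihood flagging candidates for review. The procedure names are invented for illustration.

```python
from collections import defaultdict
import math

def train(sequences, alpha=1.0):
    """Count first-order transitions over categorical sequences."""
    counts = defaultdict(lambda: defaultdict(float))
    vocab = set()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
            vocab.update((a, b))
    return counts, vocab, alpha

def avg_log_likelihood(seq, model):
    """Average per-transition log-likelihood with add-alpha smoothing."""
    counts, vocab, alpha = model
    total = 0.0
    for a, b in zip(seq, seq[1:]):
        row = counts[a]
        denom = sum(row.values()) + alpha * len(vocab)
        total += math.log((row[b] + alpha) / denom)
    return total / max(len(seq) - 1, 1)

typical = [["exam", "xray", "cast"], ["exam", "xray", "cast"], ["exam", "rx"]]
model = train(typical)
print(avg_log_likelihood(["exam", "xray", "cast"], model))   # relatively high
print(avg_log_likelihood(["cast", "cast", "cast"], model))   # relatively low (anomalous)
```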
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a pair of run-length/AC coefficient level, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first one is given as an upper-bound for the sum of squares of AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type constraints are based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, space of minimum of 346 bits and maximum of 433 bits is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Experimental and theoretical study on hollow-cone spray
Chang, Keh-Chin; Wang, Muh-Rong; Wu, Wen-Jing; Hong, Chia-Hong
1993-02-01
A theoretical and experimental investigation has been conducted to study the two-phase turbulent structure in an isothermal hollow-cone spray. Mean and fluctuating velocity components, drop number density, as well as drop-size distribution were measured with a nonintrusive diagnostic tool, a two-component phase Doppler particle analyzer. Complete initial conditions required for theoretical calculations were also provided by the measurements. Theoretical calculations were made with an Eulerian-Lagrangian formalism. Turbulent dispersion effects were numerically simulated using a Monte Carlo method. Turbulence modulation effects were also taken into account in the modeling. The well-defined experimental data were used to assess the accuracy of the resultant Eulerian-Lagrangian model. Comparisons showed that the theoretical predictions, based upon the Eulerian-Lagrangian model, yielded reasonable agreement with the experimental data. The improvements made by inclusion of the selected turbulence modulation model were insignificant in this work.
Determining the Tsallis parameter via maximum entropy
Conroy, J. M.; Miller, H. G.
2015-05-01
The nonextensive entropic measure proposed by Tsallis [C. Tsallis, J. Stat. Phys. 52, 479 (1988), 10.1007/BF01016429] introduces a parameter, q, which is not defined but rather must be determined. The value of q is typically determined from a piece of data and then fixed over the range of interest. On the other hand, from a phenomenological viewpoint, there are instances in which q cannot be treated as a constant. We present two distinct approaches for determining q depending on the form of the equations of constraint for the particular system. In the first case the equations of constraint for the operator Ô can be written as Tr(F^q Ô) = C, where C may be an explicit function of the distribution function F. We show that in this case one can solve an equivalent maxent problem which yields q as a function of the corresponding Lagrange multiplier. As an illustration, the exact solution of the static generalized Fokker-Planck equation (GFPE) is obtained from maxent with the Tsallis entropy. As in the case where C is a constant, if q is treated as a variable within the maxent framework the entropic measure is maximized trivially for all values of q. Therefore q must be determined from existing data. In the second case an additional equation of constraint exists which cannot be brought into the above form. In this case the additional equation of constraint may be used to determine the fixed value of q.
A. V. Khohlov
2016-01-01
The article analyses a one-dimensional linear integral constitutive equation of viscoelasticity with an arbitrary creep compliance function in order to reveal its ability to describe the set of basic rheological phenomena pertaining to viscoelastoplastic materials at a constant temperature. General equations and basic properties of its quasi-static theoretic curves (i.e. stress-strain curves at constant strain or stress rates, creep curves, creep recovery curves, creep curves at piecewise-constant stress and ramp relaxation curves) generated by the linear constitutive equation are derived and studied analytically. Their dependences on the creep function and relaxation modulus and on the loading program parameters are examined. The qualitative properties of the theoretic curves are compared to the typical properties of test curves of viscoelastoplastic materials to reveal the mechanical effects which the linear viscoelasticity theory cannot simulate and to find convenient experimental indicators marking the field of its applicability or non-applicability. The minimal set of general restrictions that should be imposed on the creep and relaxation functions to provide an adequate description of typical test curves of viscoelastoplastic materials is formulated. It is proved, in particular, that an adequate simulation of typical experimental creep recovery curves requires that the derivative of the creep function should not increase at any point. This restriction implies that the linear viscoelasticity theory yields theoretical creep curves with non-increasing creep rate only, and it cannot simulate materials demonstrating an accelerated creep stage. It is also proved that linear viscoelasticity cannot simulate materials with experimental stress-strain curves possessing a maximum point or a concave-up segment, or materials exhibiting a dependence of the equilibrium modulus on the strain rate or negative rate sensitivity. Similar qualitative analysis seems to be an important
Stuart Wilkinson
2016-06-01
Enzyme saccharification of pretreated brewers' spent grains (BSG) was investigated, aiming at maximising glucose production. Factors investigated were: variation of the solids loadings at different cellulolytic enzyme doses, reaction time, higher-energy mixing methods, supplementation of the cellulolytic enzymes with additional enzymes (and cofactors) and use of fed-batch methods. Improved slurry agitation through aerated high-torque mixing offered small but significant enhancements in glucose yields (to 53 ± 2.9 g/L and 45% of theoretical yield) compared to only 41 ± 4.0 g/L and 39% of theoretical yield for standard shaking methods (at 15% w/v solids loading). Supplementation of the cellulolytic enzymes with additional enzymes (acetyl xylan esterases, ferulic acid esterases and α-L-arabinofuranosidases) also boosted the achieved glucose yields to 58-69 ± 0.8-6.2 g/L, which equated to 52-58% of theoretical yield. Fed-batch methods also enhanced glucose yields (to 58 ± 2.2 g/L and 35% of theoretical yield at 25% w/v solids loading) compared to non-fed-batch methods. From these investigations a novel enzymatic saccharification method was developed (using enhanced mixing, a fed-batch approach and additional carbohydrate-degrading enzymes) which further increased glucose yields to 78 ± 4.1 g/L and 43% of theoretical yield when operating at high solids loading (25% w/v).
Maximum efficiency of state-space models of nanoscale energy conversion devices.
Einax, Mario; Nitzan, Abraham
2016-07-07
The performance of nano-scale energy conversion devices is studied in the framework of state-space models where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield the Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate these general results with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
Maximum efficiency of state-space models of nanoscale energy conversion devices
Einax, Mario; Nitzan, Abraham
2016-07-01
The performance of nano-scale energy conversion devices is studied in the framework of state-space models where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield the Carnot efficiency as the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
Potential role of motion for enhancing maximum output energy of triboelectric nanogenerator
Byun, Kyung-Eun; Lee, Min-Hyun; Cho, Yeonchoo; Nam, Seung-Geol; Shin, Hyeon-Jin; Park, Seongjun
2017-07-01
Although the triboelectric nanogenerator (TENG) has been explored as one of the possible candidates for the auxiliary power source of portable and wearable devices, the output energy of a TENG is still insufficient to charge such devices with daily motion. Moreover, the fundamental aspects of the maximum possible energy of a TENG related to human motion are not systematically understood. Here, we confirmed the possibility of charging commercialized portable and wearable devices such as smart phones and smart watches by utilizing the mechanical energy generated by human motion. We confirmed by theoretical derivation that the maximum possible energy is related to specific form factors of a TENG. Furthermore, we experimentally demonstrated the effect of human motion in terms of kinetic energy and impulse by varying velocity and elasticity, and clarified how to improve the maximum possible energy of a TENG. This study gives insight into the design of a TENG to obtain a large amount of energy in a limited space.
Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates.
Chen, Ming-Hui; Ibrahim, Joseph G; Shao, Qi-Man
2009-10-01
In this paper, we carry out an in-depth theoretical investigation for existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975) both in the full data setting as well as in the presence of missing covariate data. The main motivation for this work arises from missing data problems, where models can easily become difficult to estimate with certain missing data configurations or large missing data fractions. We establish necessary and sufficient conditions for existence of the maximum partial likelihood estimate (MPLE) for completely observed data (i.e., no missing data) settings as well as sufficient conditions for existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology.
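As a practical complement to the existence results described above, the sketch below fits a Cox proportional hazards model by maximizing the partial likelihood on a small synthetic dataset; the lifelines package, the column names and the simulated data are illustrative choices, not anything used in the paper.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    # Synthetic survival data with one covariate and random censoring (illustrative only).
    rng = np.random.default_rng(0)
    n = 200
    x = rng.normal(size=n)
    event_time = rng.exponential(scale=np.exp(-0.5 * x))   # true log-hazard ratio of 0.5
    censor_time = rng.exponential(scale=1.5, size=n)
    df = pd.DataFrame({
        "time": np.minimum(event_time, censor_time),
        "event": (event_time <= censor_time).astype(int),
        "x": x,
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")     # maximizes the Cox partial likelihood
    print(cph.summary[["coef", "se(coef)"]])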
Systematic measurement of maximum efficiencies and detuning lengths at the JAERI free-electron laser
Nishimori, N; Nagai, R; Minehara, E J
2002-01-01
We made a systematic measurement of efficiency detuning curves at several gain and loss parameters. The absolute detuning length (δL) of an optical cavity was measured within an accuracy of 0.1 μm around the maximum efficiency by a pulse-stacking method using an external laser. The FEL gain was controlled by the undulator gap instead of bunch charge, because we can change the gain rapidly while maintaining constant electron bunch conditions. For the high-gain and low-loss regions, the maximum efficiency is obtained at δL = 0 μm and is larger than the value derived from the theoretical scaling law in the superradiant regime, while for the low-gain region the maximum efficiency is obtained for δL shorter than 0 μm and is similar to the scaling law.
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
OPTIMAL FEED STRATEGY FOR FED-BATCH GLYCEROL FERMENTATION DETERMINED BY MAXIMUM PRINCIPLE
Anonymous
2000-01-01
Glycerol fed-batch fermentation is attractive for commercial application since it can control the glucose concentration by changing the feed rate and obtain a high glycerol yield; it is therefore essential to develop an optimal glucose feed strategy. For most fed-batch fermentations, optimization of the feed rate has been based on Pontryagin's maximum principle. Since the feed rate appears linearly in the Hamiltonian, the optimal feed rate profile usually consists of bang-bang intervals and singular ...
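For illustration only, a bang-bang feed-rate profile with a singular (intermediate) arc of the kind referred to above can be sketched as a piecewise-constant function; the switching times and rates below are invented placeholders, not results from the paper.

    # Illustrative bang-bang feed profile with a singular arc; all numbers are placeholders.
    F_MAX, F_SINGULAR = 1.0, 0.4        # assumed feed-rate bounds, L/h

    def feed_rate(t, t1=5.0, t2=15.0):
        """Full feed (bang), then a singular interval at an intermediate rate, then feed off (bang)."""
        if t < t1:
            return F_MAX
        if t < t2:
            return F_SINGULAR
        return 0.0

    print([feed_rate(t) for t in (2.0, 10.0, 20.0)])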
Maximum Langmuir Fields in Planetary Foreshocks Determined from the Electrostatic Decay Threshold
Robinson, P. A.; Cairns, Iver H.
1995-01-01
Maximum electric fields of Langmuir waves at planetary foreshocks are estimated from the threshold for electrostatic decay, assuming it saturates beam-driven growth, and incorporating heliospheric variation of plasma density and temperature. Comparisons with spacecraft observations yield good quantitative agreement. Observations in type III radio sources are also in accord with this interpretation. A single mechanism can thus account for the highest fields of beam-driven waves in both contexts.
A polynomial algorithm for abstract maximum flow
McCormick, S.T. [Univ. of British Columbia, Vancouver, British Columbia (Canada)]
1996-12-31
Ford and Fulkerson's original 1956 max flow/min cut paper formulated max flow in terms of flows on paths, rather than the more familiar flows on arcs. In 1974 Hoffman pointed out that Ford and Fulkerson's original proof was quite abstract, and applied to a wide range of max flow-like problems. In this abstract model we have capacitated elements, and linearly ordered subsets of elements called paths. When two paths share an element ("cross"), then there must be a path that is a subset of the first path up to the cross, and a subset of the second path after the cross. (Hoffman's generalization of) Ford and Fulkerson's proof showed that the max flow/min cut theorem still holds under this weak assumption. However, this proof is non-constructive. To get an algorithm, we assume that we have an oracle whose input is an arbitrary subset of elements, and whose output is either a path contained in that subset, or the statement that no such path exists. We then use complementary slackness to show how to augment any feasible set of path flows to a set with a strictly larger total flow value using a polynomial number of calls to the oracle. Then standard scaling techniques yield an overall polynomial algorithm for finding both a max flow and a min cut. Hoffman's paper actually considers a sort of supermodular objective on the path flows, which allows him to include transportation problems and thus min-cost flow in his framework. We also discuss extending our algorithm to this more general case.
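To make the augmenting-path idea concrete, the sketch below implements the classical arc-based Edmonds-Karp variant of Ford and Fulkerson's method; note that this is the familiar flows-on-arcs setting, not Hoffman's abstract path-oracle model discussed in the abstract, and the small network at the end is invented for illustration.

    from collections import deque

    def max_flow(capacity, source, sink):
        """Edmonds-Karp: repeatedly augment along a shortest residual path.

        `capacity` is a dict-of-dicts of arc capacities. This is the classical
        arc-based algorithm, shown to illustrate the augmenting-path idea; the
        paper's abstract model replaces explicit arcs with a path oracle.
        """
        # Build residual capacities, adding zero-capacity reverse arcs.
        res = {u: dict(nbrs) for u, nbrs in capacity.items()}
        for u, nbrs in capacity.items():
            for v in nbrs:
                res.setdefault(v, {}).setdefault(u, 0)
        flow = 0
        while True:
            # Breadth-first search for a shortest augmenting path in the residual graph.
            parent = {source: None}
            queue = deque([source])
            while queue and sink not in parent:
                u = queue.popleft()
                for v, c in res.get(u, {}).items():
                    if c > 0 and v not in parent:
                        parent[v] = u
                        queue.append(v)
            if sink not in parent:
                return flow
            # Trace the path, find its bottleneck capacity and push flow along it.
            path, v = [], sink
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(res[u][v] for u, v in path)
            for u, v in path:
                res[u][v] -= bottleneck
                res[v][u] += bottleneck
            flow += bottleneck

    caps = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}}
    print(max_flow(caps, "s", "t"))    # prints 4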
Effect of radicals combination on acetylene yield in process of coal pyrolysis by hydrogen plasma
Dai, B.; Fan, Y.; Yang, J.; Xiao, J. [Tsinghua University, Beijing (China). Dept. of Engineering Mechanics]
1999-07-01
A new process for production of acetylene by pyrolysis of coal in hydrogen plasma overcomes the disadvantages of discontinuity and pollution in the conventional carbide method. Complex homogeneous reactions take place after pulverized coal is injected into a high-temperature plasma reactor. In order to preserve C₂H₂ in the low-temperature gas, quenching is needed to avoid the dissociation of acetylene. The objective of this paper is to show that radical recombination is also important in acetylene production; therefore the quenching process should be optimized to obtain a high yield of acetylene. In this work, the C-H equilibrium system in the high-temperature range of 2000-5000 K is obtained using the free energy minimization method. At lower temperature, the decomposition of acetylene can be avoided while the recombination reaction of the radicals C₂H and H will not be interrupted. As a result, the acetylene concentration in the quenched gas will increase. The theoretical acetylene content in the quenched gas is computed using the radical recombination mechanism based on the composition at thermal equilibrium, and the optimized C/H ratio is determined simultaneously. The maximum acetylene content is 59.9% by volume. 4 refs., 3 figs., 1 tab.
L. M. Miller
2011-02-01
The availability of wind power for renewable energy extraction is ultimately limited by how much kinetic energy is generated by natural processes within the Earth system and by fundamental limits of how much of the wind power can be extracted. Here we use these considerations to provide a maximum estimate of wind power availability over land. We use several different methods. First, we outline the processes associated with wind power generation and extraction with a simple power transfer hierarchy based on the assumption that available wind power will not geographically vary with increased extraction for an estimate of 68 TW. Second, we set up a simple momentum balance model to estimate maximum extractability which we then apply to reanalysis climate data, yielding an estimate of 21 TW. Third, we perform general circulation model simulations in which we extract different amounts of momentum from the atmospheric boundary layer to obtain a maximum estimate of how much power can be extracted, yielding 18–34 TW. These three methods consistently yield maximum estimates in the range of 18–68 TW and are notably less than recent estimates that claim abundant wind power availability. Furthermore, we show with the general circulation model simulations that some climatic effects at maximum wind power extraction are similar in magnitude to those associated with a doubling of atmospheric CO₂. We conclude that in order to understand fundamental limits to renewable energy resources, as well as the impacts of their utilization, it is imperative to use a "top-down" thermodynamic Earth system perspective, rather than the more common "bottom-up" engineering approach.
Piezoelectricity in quasicrystals: A group-theoretical study
K Rama Rao; P Hemagiri Rao; B S K Chaitanya
2007-03-01
Group-theoretical methods have been accepted as exact and reliable tools in studying the physical properties of crystals and quasicrystalline materials. By group representation theory, the maximum number of non-vanishing and independent second-order piezoelectric coefficients required by the seven pentagonal and two icosahedral point groups - that describe the quasicrystal symmetry groups in two and three dimensions - is determined. The schemes of non-vanishing and independent second-order piezoelectric tensor components needed by the nine point groups with five-fold rotations are identified and tabulated employing a compact notation. The results of this group-theoretical study are briefly discussed.
Jung, Young Hoon; Park, Hyun Min; Kim, Dong Hyun; Yang, Jungwoo; Kim, Kyoung Heon
2017-01-11
To reduce the distillation costs of cellulosic ethanol, it is necessary to produce high sugar titers in the enzymatic saccharification step. To obtain high sugar titers, high biomass loadings of lignocellulose are necessary. In this study, to overcome the low saccharification yields and the low operability of high biomass loadings, a fed-batch saccharification process was developed using an enzyme reactor that was designed and built in-house. After optimizing the cellulase and biomass feeding profiles and the agitation speed, 132.6 g/L glucose and 76.0% of the theoretical maximum glucose were obtained from the 60 h saccharification of maleic acid-pretreated rice straw at a 30% (w/v) solids loading with 15 filter paper units (FPU) of Cellic CTec2/g glucan. This study demonstrated that, through the proper optimization of fed-batch saccharification, both high sugar titers and high saccharification yields are possible even when using a high solids loading (i.e., ≥30%) with a moderate enzyme loading (i.e., 15 FPU/g glucan), which benefits the high solids saccharification process in cellulosic fuel and chemical production.
Carbon Coatings with Low Secondary Electron Yield
Taborelli, M; Costa Pinto, P; Calatroni, S; Chiggiato, P; Edwards, P; Letant-Delrieux, D; Lucas, S; Neupert, H; Vollenberg, W; Yin-Vallgren, C
2013-01-01
Carbon thin films for electron cloud mitigation and anti-multipacting applications have been prepared by dc magnetron sputtering in both neon and argon discharge gases and by plasma enhanced chemical vapour deposition (PECVD) using acetylene. The thin films have been characterized using Secondary Electron Yield (SEY) measurements, Scanning Electron Microscopy (SEM), Nuclear Reaction Analysis (NRA) and X-ray Photoelectron Spectroscopy (XPS). For more than 100 carbon thin films prepared by sputtering the average maximum SEY is 0.98 ± 0.07 after air transfer. The density of the films is lower than the density of Highly Ordered Pyrolytic Graphite (HOPG), a fact which partially explains their lower SEY. XPS shows that magnetron sputtered samples exhibit mainly sp2 type bonds. The intensity on the high binding energy side of C1s is found to be related to the value of the SEY. Instead, the initial surface concentration of oxygen has no influence on the resulting SEY, when it is below 16%. The thin films produced by P...
Svendsen, Morten Bo Søndergaard; Domenici, Paolo; Marras, Stefano
2016-01-01
Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s⁻¹ but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish...
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
Anonymous
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method.
Roux, Benoît; Weare, Jonathan
2013-02-28
An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method.
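As a minimal illustration of the maximum entropy reweighting idea discussed above (one observable, one Lagrange multiplier), the sketch below reweights samples from an unbiased ensemble so that the average of an observable matches a target value; the samples and the target are synthetic and purely illustrative.

    import numpy as np
    from scipy.optimize import brentq

    # Synthetic "simulation" samples of an observable f and an invented experimental target.
    rng = np.random.default_rng(1)
    f = rng.normal(loc=0.0, scale=1.0, size=5000)
    f_target = 0.3

    # Maximum-entropy weights relative to the unbiased ensemble have the Gibbs form
    # w_i ∝ exp(lam * f_i); solve for the multiplier lam that matches the target average.
    def mismatch(lam):
        w = np.exp(lam * f)
        w /= w.sum()
        return np.sum(w * f) - f_target

    lam = brentq(mismatch, -10.0, 10.0)
    w = np.exp(lam * f)
    w /= w.sum()
    print(lam, np.sum(w * f))    # reweighted average ≈ 0.3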
Theoretical chemistry advances and perspectives
Eyring, Henry
1980-01-01
Theoretical Chemistry: Advances and Perspectives, Volume 5 covers articles concerning all aspects of theoretical chemistry. The book discusses the mean spherical approximation for simple electrolyte solutions; the representation of lattice sums as Mellin-transformed products of theta functions; and the evaluation of two-dimensional lattice sums by number theoretic means. The text also describes an application of contour integration; a lattice model of quantum fluid; as well as the computational aspects of chemical equilibrium in complex systems. Chemists and physicists will find the book useful.
Theoretical foundations of the chronometric cosmology.
Segal, I E
1976-03-01
The derivation of the redshift (z)-distance (r) relation in the chronometric theory of the Cosmos is amplified. The basic physical quantities are represented by precisely defined self-adjoint operators in global Hilbert spaces. Computations yielding explicit bounds for the deviation of the theoretical prediction from the relation z = tan²(r/2R) (where R denotes the radius of the universe), earlier derived employing less formal procedures, are carried out for: (a) a cut-off plane wave in two dimensions; (b) a scalar spherical wave in four dimensions; (c) the same as (b) with appropriate incorporation of the photon spin. Both this deviation and the (quantum) dispersion in redshift are shown to be unobservably small. A parallel classical treatment is possible and leads to similar results.
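For concreteness, the quoted relation can be evaluated directly; the small sketch below simply computes z = tan²(r/2R) for a chosen distance and shows the quadratic small-distance behaviour (the numbers are arbitrary).

    import numpy as np

    def chronometric_redshift(r, R):
        """z = tan^2(r / (2R)), the chronometric redshift-distance relation quoted above."""
        return np.tan(r / (2.0 * R)) ** 2

    # For r much smaller than R, z ≈ (r / (2R))^2.
    print(chronometric_redshift(0.1, 1.0))    # ≈ 0.0025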
Upstream proton cyclotron waves at Venus near solar maximum
Delva, M.; Bertucci, C.; Volwerk, M.; Lundin, R.; Mazelle, C.; Romanelli, N.
2015-01-01
Magnetometer data of Venus Express are analyzed for the occurrence of waves at the proton cyclotron frequency in the spacecraft frame in the upstream region of Venus, for conditions of rising solar activity. The data of two Venus years up to the time of the highest sunspot number so far (1 Mar 2011 to 31 May 2012) are studied to reveal the properties of the waves and the interplanetary magnetic field (IMF) conditions under which they are observed. In general, waves generated by newborn protons from exospheric hydrogen are observed under quasi-(anti)parallel conditions of the IMF and the solar wind velocity, as is expected from theoretical models. The present study near solar maximum finds significantly more waves than a previous study for solar minimum, with an asymmetry in the wave occurrence, i.e., mainly under antiparallel conditions. The plasma data from the Analyzer of Space Plasmas and Energetic Atoms instrument aboard Venus Express enable analysis of the background solar wind conditions. The prevalence of waves for IMF in the direction toward the Sun is related to the stronger southward tilt of the heliospheric current sheet during the rising phase of Solar Cycle 24, i.e., the "bashful ballerina" is responsible for asymmetric background solar wind conditions. The increase in the number of wave occurrences may be explained by a significant increase in the relative density of planetary protons with respect to the solar wind background. An exceptionally low solar wind proton density is observed during the rising phase of Solar Cycle 24. At the same time, higher EUV increases the ionization in the Venus exosphere, resulting in a higher supply of energy from a higher number of newborn protons to the waves. We conclude that in addition to quasi-(anti)parallel conditions of the IMF and the solar wind velocity direction, the higher relative density of Venus exospheric protons with respect to the background solar wind proton density is the key parameter for the higher number of
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Jayakumar, M.; Rajavel, M.; Surendran, U.
2016-07-01
A study on the variability of coffee yield of both Coffea arabica and Coffea canephora as influenced by climate parameters (rainfall (RF), maximum temperature (Tmax), minimum temperature (Tmin), and mean relative humidity (RH)) was undertaken at the Regional Coffee Research Station, Chundale, Wayanad, Kerala State, India. The coffee yield data of 30 years (1980 to 2009) revealed that the yield of coffee fluctuates with variations in the climatic parameters. Among the species, productivity was higher for C. canephora than for C. arabica in most of the years. The maximum yield of C. canephora (2040 kg ha⁻¹) was recorded in 2003-2004, and a declining trend in yield was noticed in recent years. Similarly, the maximum yield of C. arabica (1745 kg ha⁻¹) was recorded in 1988-1989, and decreased yields were noticed in the subsequent years till 1997-1998 due to year-to-year variability in climate. The highest correlation coefficient was found between the yield of C. arabica coffee and maximum temperature during January (0.7) and between C. arabica coffee yield and RH during July (0.4). The yield of C. canephora coffee had the highest correlation with maximum temperature, RH and rainfall during February. A statistical regression model between selected climatic parameters and the yields of C. arabica and C. canephora coffee was developed to forecast the yield of coffee in Wayanad district in Kerala. The model was validated for the years 2010, 2011, and 2012 with the coffee yield data obtained during those years, and the prediction was found to be good.
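A sketch of the kind of climate-yield regression described above is given below; the column names, the synthetic data and the coefficient values are invented for illustration and do not reproduce the authors' model.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical annual records: yield and a few seasonal climate summaries.
    rng = np.random.default_rng(2)
    n = 30
    climate = pd.DataFrame({
        "rain_mm": rng.normal(2500, 300, n),
        "tmax_c": rng.normal(29, 1.0, n),
        "rh_pct": rng.normal(80, 5.0, n),
    })
    yield_kg_ha = (1500 - 40 * (climate["tmax_c"] - 29)
                   + 0.1 * (climate["rain_mm"] - 2500)
                   + rng.normal(0, 50, n))

    # Ordinary least squares regression of yield on the climate parameters.
    model = sm.OLS(yield_kg_ha, sm.add_constant(climate)).fit()
    print(model.params)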
Jayakumar, M.; Rajavel, M.; Surendran, U.
2016-12-01
A study on the variability of coffee yield of both Coffea arabica and Coffea canephora as influenced by climate parameters (rainfall (RF), maximum temperature (Tmax), minimum temperature (Tmin), and mean relative humidity (RH)) was undertaken at the Regional Coffee Research Station, Chundale, Wayanad, Kerala State, India. The coffee yield data of 30 years (1980 to 2009) revealed that the yield of coffee fluctuates with variations in the climatic parameters. Among the species, productivity was higher for C. canephora than for C. arabica in most of the years. The maximum yield of C. canephora (2040 kg ha⁻¹) was recorded in 2003-2004, and a declining trend in yield was noticed in recent years. Similarly, the maximum yield of C. arabica (1745 kg ha⁻¹) was recorded in 1988-1989, and decreased yields were noticed in the subsequent years till 1997-1998 due to year-to-year variability in climate. The highest correlation coefficient was found between the yield of C. arabica coffee and maximum temperature during January (0.7) and between C. arabica coffee yield and RH during July (0.4). The yield of C. canephora coffee had the highest correlation with maximum temperature, RH and rainfall during February. A statistical regression model between selected climatic parameters and the yields of C. arabica and C. canephora coffee was developed to forecast the yield of coffee in Wayanad district in Kerala. The model was validated for the years 2010, 2011, and 2012 with the coffee yield data obtained during those years, and the prediction was found to be good.
Theoretical approaches to elections defining
Natalya V. Lebedeva
2011-01-01
Theoretical approaches to defining elections develop the nature, essence and content of elections and help to determine their place and role as one of the major institutions of national law in a democratic system.
Fsusy and Field Theoretical Construction
Sedra, M B
2009-01-01
Following our previous work on fractional spin symmetries (FSS) \\cite{6, 7}, we consider here the construction of field theoretical models that are invariant under the $D=2(1/3,1/3)$ supersymmetric algebra.
Theoretical Foundations of Learning Communities
Jessup-Anger, Jody E.
2015-01-01
This chapter describes the historical and contemporary theoretical underpinnings of learning communities and argues that there is a need for more complex models in conceptualizing and assessing their effectiveness.
Theoretical Studies of Proton Radioactivity
Lídia S Ferreira; Enrico Maglione
2016-01-01
In this paper, we discuss the most recent theoretical approaches developed by our group to understand the mechanisms of decay by one-proton emission, and the structure and shape of exotic nuclei at the limits of stability.
Euclid's Number-Theoretical Work
Zhang, Shaohua
2009-01-01
The object of this paper is to affirm the number-theoretical role of Euclid and the historical significance of Euclid's algorithm. We give a brief introduction to Euclid's number-theoretical work. Our study is the first to show that Euclid's algorithm is essentially equivalent to the division algorithm, which is the basis of the theory of divisibility. Note also that Euclid's algorithm implies Euclid's first theorem and Euclid's second theorem. Thus, in the nature of things, Euclid's algorithm is the most important number-theoretical work of Euclid. For this reason, we further briefly summarize the influence of Euclid's algorithm. It leads to the conclusion that Euclid's algorithm is the greatest number-theoretical achievement of the age.
THEORETICAL APPROACHES IN INTERNATIONAL RELATIONS ...
plt
understanding of the social dynamics of the world we live in. Theoretical approaches are also instrumental in shaping perceptions of what matters in international politics ... This implies that, as a technique of last resort, the military instrument.
Organisational Learning: Theoretical Shortcomings and Practical Challenges
Jon Aarum Andersen
2014-05-01
This paper addresses two problems related to learning and the use of knowledge at work. The first problem is the theoretical shortcomings stemming from the controversy between three different concepts of 'organisational learning.' In order to enhance scholarship in this field, the notion that organisations - as organisations - can learn needs to be rejected for theoretical and empirical reasons. The metaphorical use of 'organisational learning' creates only confusion. Learning is a process and knowledge is the outcome of that process. It is argued that learning and knowledge are only related to individuals. Knowledge is thus the individual capability to draw distinctions, within a domain of action, based on an appreciation of context or theory. Consequently, knowledge becomes organisational when it is created, developed and transmitted to other individuals in the organisation. In a strict sense, knowledge becomes organisational when employees use it and act based on generalisations due to the rules and procedures found in their organisation. The gravest problem is the practical challenge that the emphasis on learning, knowledge and competence of the workforce does not materialize in the application of the knowledge acquired. It is evident that employees do not use their increased knowledge, yet we do not know why. An enormous amount of money is spent on learning and knowledge in organisations without yielding what is expected. How can managers act in order to enhance the application of the increased knowledge possessed by the workforce?
Theoretical models of ferromagnetic III-V semiconductors
Jungwirth, T.; Sinova, Jairo; Kučera, J.; MacDonald, A. H.
2002-01-01
Recent materials research has advanced the maximum ferromagnetic transition temperature in semiconductors containing magnetic elements toward room temperature. Reaching this goal would make information technology applications of these materials likely. In this article we briefly review the status of work over the past five years which has attempted to achieve a theoretical understanding of these complex magnetic systems. The basic microscopic origins of ferromagnetism in the (III,Mn)V compounds...
Insuring against loss of evidence in game-theoretic probability
Dawid, A Philip; Shafer, Glenn; Shen, Alexander; Vereshchagin, Nikolai; Vovk, Vladimir
2010-01-01
We consider the game-theoretic scenario of testing the performance of Forecaster by Sceptic who gambles against the forecasts. Sceptic's current capital is interpreted as the amount of evidence he has found against Forecaster. Reporting the maximum of Sceptic's capital so far exaggerates the evidence. We characterize the set of all increasing functions that remove the exaggeration. This result can be used for insuring against loss of evidence.
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
USER
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
A Clustering Method Based on the Maximum Entropy Principle
Edwin Aldana-Bobadilla
2015-01-01
Clustering is an unsupervised process to determine which unlabeled objects in a set share interesting properties. The objects are grouped into k subsets (clusters) whose elements optimize a proximity measure. Methods based on information theory have proven to be feasible alternatives. They are based on the assumption that a cluster is one subset with the minimal possible degree of "disorder". They attempt to minimize the entropy of each cluster. We propose a clustering method based on the maximum entropy principle. Such a method explores the space of all possible probability distributions of the data to find one that maximizes the entropy subject to extra conditions based on prior information about the clusters. The prior information is based on the assumption that the elements of a cluster are "similar" to each other in accordance with some statistical measure. As a consequence of such a principle, those distributions of high entropy that satisfy the conditions are favored over others. Searching the space to find the optimal distribution of objects in the clusters represents a hard combinatorial problem, which disallows the use of traditional optimization techniques. Genetic algorithms are a good alternative to solve this problem. We benchmark our method relative to the best theoretical performance, which is given by the Bayes classifier when data are normally distributed, and a multilayer perceptron network, which offers the best practical performance when data are not normal. In general, a supervised classification method will outperform a non-supervised one, since, in the first case, the elements of the classes are known a priori. In what follows, we show that our method's effectiveness is comparable to a supervised one. This clearly exhibits the superiority of our method.
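As a toy illustration of the entropy principle in clustering (not the authors' genetic-algorithm search), the sketch below uses the standard result that, for a fixed expected within-cluster distortion, the entropy-maximizing soft assignments take a Gibbs form; the data, centres and temperature are invented.

    import numpy as np

    # Among all assignment distributions with a given expected distortion, the
    # maximum-entropy one is p(k|i) ∝ exp(-d_ik / T); T trades entropy against distortion.
    rng = np.random.default_rng(3)
    x = np.concatenate([rng.normal(-2.0, 0.5, 50), rng.normal(2.0, 0.5, 50)])
    centers = np.array([-1.0, 1.0])
    T = 0.5

    d = (x[:, None] - centers[None, :]) ** 2       # squared distance of each point to each centre
    p = np.exp(-d / T)
    p /= p.sum(axis=1, keepdims=True)              # soft maximum-entropy assignments
    print(p[:3].round(3))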
Chamoux, A; Berthon, P; Laubignat, J F
1996-01-01
Field measurement of the maximal aerobic velocity (MAV) is closely linked to effort duration and thus to the protocol used. We construct the relationship between running speed and the logarithm of running duration from running world records. A noteworthy point appears at 4.97 minutes, which is suggested as the MAV duration point. By convention, MAV could therefore be measured in the field by a five-minute test, whatever the sport.
Plant genetics: increasing crop yield.
Day, P R
1977-09-30
Cell cultures of crop plants provide new opportunities to recover induced mutations likely to increase crop yield. Approaches include regulating respiration to conserve carbon fixed by photosynthesis, and increasing the nutritive value of seed protein. They depend on devising selection conditions in which only the desired mutant cells can survive. Protoplast fusion offers some promise of tapping sources of genetic variation now unavailable because of sterility barriers between species and genera. Difficulties in regenerating cell lines from protoplasts, and plants from cells, still hamper progress but are becoming less severe. Recombinant DNA techniques may allow detection and selection of bacterial cell lines carrying specific DNA sequences. Isolation and amplification of crop plant genes could then lead to ways of transforming plants that will be useful to breeders.
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS respectively. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10-100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO2max) and one-repetition maximum (1RM)) to determine...
Sánchez, Ailen M; Bennett, George N; San, Ka-Yiu
2005-05-01
A novel in vivo method of producing succinate has been developed. A genetically engineered Escherichia coli strain has been constructed to meet the NADH requirement and carbon demand to produce high quantities and yield of succinate by strategically implementing metabolic pathway alterations. Currently, the maximum theoretical succinate yield under strictly anaerobic conditions through the fermentative succinate biosynthesis pathway is limited to one mole per mole of glucose due to NADH limitation. The implemented strategic design involves the construction of a dual succinate synthesis route, which diverts required quantities of NADH through the traditional fermentative pathway and maximizes the carbon converted to succinate by balancing the carbon flux through the fermentative pathway and the glyoxylate pathway (which has a lower NADH requirement). The synthesis of succinate uses a combination of the two pathways to balance the NADH. Consequently, experimental results indicated that these combined pathways gave the most efficient conversion of glucose to succinate with the highest yield, using only 1.25 moles of NADH per mole of succinate in contrast to the sole fermentative pathway, which uses 2 moles of NADH per mole of succinate. A recombinant E. coli strain, SBS550MG, was created by deactivating adhE, ldhA and ack-pta from the central metabolic pathway and by activating the glyoxylate pathway through the inactivation of iclR, which encodes a transcriptional repressor protein of the glyoxylate bypass. The inactivation of these genes in SBS550MG increased the succinate yield from glucose to about 1.6 mol/mol with an average anaerobic productivity rate of 10 mM/h (approximately 0.64 mM/h-OD600). This strain is capable of fermenting high concentrations of glucose in less than 24 h. Additional derepression of the glyoxylate pathway by inactivation of arcA, leading to a strain designated as SBS660MG, did not significantly increase the succinate yield and it decreased
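A back-of-the-envelope check of the NADH arithmetic quoted above: assuming the standard figure of 2 NADH produced per glucose in glycolysis (an assumption not stated explicitly in the abstract), the NADH-limited yield is simply the ratio of NADH supply to the NADH demand per succinate.

    # NADH-limited maximum succinate yield (mol succinate per mol glucose).
    NADH_PER_GLUCOSE = 2.0            # glycolysis; assumed standard value

    def nadh_limited_yield(nadh_per_succinate):
        return NADH_PER_GLUCOSE / nadh_per_succinate

    print(nadh_limited_yield(2.0))    # sole fermentative route -> 1.0 mol/mol
    print(nadh_limited_yield(1.25))   # combined fermentative + glyoxylate route -> 1.6 mol/mol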
Liu, Jianming; Chan, Siu Hung Joshua; Brock-Nannestad, Theis; Chen, Jun; Lee, Sang Yup; Solem, Christian; Jensen, Peter Ruhdal
2016-07-01
Biocompatible chemistry is gaining increasing attention because of its potential within biotechnology for expanding the repertoire of biological transformations carried out by enzymes. Here we demonstrate how biocompatible chemistry can be used for synthesizing valuable compounds as well as for linking metabolic pathways to achieve redox balance and rescued growth. By comprehensive rerouting of metabolism, activation of respiration, and finally metal ion catalysis, we successfully managed to convert the homolactic bacterium Lactococcus lactis into a homo-diacetyl producer with high titer (95 mM or 8.2 g/L) and high yield (87% of the theoretical maximum). Subsequently, the pathway was extended to (S,S)-2,3-butanediol (S-BDO) through efficiently linking two metabolic pathways via chemical catalysis. This resulted in efficient homo-S-BDO production with a titer of 74 mM (6.7 g/L) S-BDO and a yield of 82%. The diacetyl and S-BDO production rates and yields obtained are the highest ever reported, demonstrating the promising combination of metabolic engineering and biocompatible chemistry as well as the great potential of L. lactis as a new production platform.
Araudo, Anabella T; Crilly, Aidan; Blundell, Katherine M
2016-01-01
It has been suggested that relativistic shocks in extragalactic sources may accelerate the highest energy cosmic rays. The maximum energy to which cosmic rays can be accelerated depends on the structure of magnetic turbulence near the shock, but recent theoretical advances indicate that relativistic shocks are probably unable to accelerate particles to energies much larger than a PeV. We study the hotspots of powerful radio galaxies, where electrons accelerated at the termination shock emit synchrotron radiation. The turnover of the synchrotron spectrum is typically observed between infrared and optical frequencies, indicating that the maximum energy of non-thermal electrons accelerated at the shock is below 1 TeV for a canonical magnetic field of ~100 μG. Based on theoretical considerations we show that this maximum energy cannot be constrained by synchrotron losses as usually assumed, unless the jet density is unreasonably large and most of the jet upstream energy goes to non-thermal particles. We test ...
Modelling the maximum voluntary joint torque/angular velocity relationship in human movement.
Yeadon, Maurice R; King, Mark A; Wilson, Cassie
2006-01-01
The force exerted by a muscle is a function of the activation level and the maximum (tetanic) muscle force. In "maximum" voluntary knee extensions muscle activation is lower for eccentric muscle velocities than for concentric velocities. The aim of this study was to model this "differential activation" in order to calculate the maximum voluntary knee extensor torque as a function of knee angular velocity. Torque data were collected on two subjects during maximal eccentric-concentric knee extensions using an isovelocity dynamometer with crank angular velocities ranging from 50 to 450 degrees s⁻¹. The theoretical tetanic torque/angular velocity relationship was modelled using a four-parameter function comprising two rectangular hyperbolas while the activation/angular velocity relationship was modelled using a three-parameter function that rose from submaximal activation for eccentric velocities to full activation for high concentric velocities. The product of these two functions gave a seven-parameter function which was fitted to the joint torque/angular velocity data, giving unbiased root mean square differences of 1.9% and 3.3% of the maximum torques achieved. Differential activation accounts for the non-hyperbolic behaviour of the torque/angular velocity data for low concentric velocities. The maximum voluntary knee extensor torque that can be exerted may be modelled accurately as the product of functions defining the maximum torque and the maximum voluntary activation level. Failure to include differential activation considerations when modelling maximal movements will lead to errors in the estimation of joint torque in the eccentric phase and low velocity concentric phase.
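To make the structure of that model concrete, the sketch below multiplies an assumed tetanic torque-velocity curve by an assumed activation-velocity curve; the functional forms and every parameter value are placeholders chosen for illustration, not the seven-parameter function fitted in the study.

    import numpy as np

    # Voluntary torque = (tetanic torque-velocity curve) x (activation-velocity curve).
    # All functional forms and parameter values below are illustrative assumptions.

    def tetanic_torque(w, T0=250.0, wmax=900.0, k=0.25, ecc_gain=1.4):
        """Two-branch torque-velocity curve (w in deg/s): Hill-type hyperbola for
        concentric velocities (w > 0), saturating rise for eccentric velocities (w < 0)."""
        w = np.asarray(w, dtype=float)
        concentric = T0 * (wmax - w) / (wmax + w / k)
        eccentric = T0 * (ecc_gain - (ecc_gain - 1.0) * wmax / (wmax - 4.0 * w))
        return np.where(w >= 0, concentric, eccentric)

    def activation(w, a_min=0.7, s=100.0):
        """Activation rises from a submaximal eccentric plateau towards 1 for fast concentric."""
        return a_min + (1.0 - a_min) / (1.0 + np.exp(-np.asarray(w, dtype=float) / s))

    def voluntary_torque(w):
        return tetanic_torque(w) * activation(w)

    print(voluntary_torque(np.array([-200.0, 0.0, 300.0])).round(1))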
Maximum host survival at intermediate parasite infection intensities.
Martin Stjernman
BACKGROUND: Although parasitism has been acknowledged as an important selective force in the evolution of host life histories, studies of fitness effects of parasites in wild populations have yielded mixed results. One reason for this may be that most studies only test for a linear relationship between infection intensity and host fitness. If resistance to parasites is costly, however, fitness may be reduced both for hosts with low infection intensities (cost of resistance) and high infection intensities (cost of parasitism), such that individuals with intermediate infection intensities have highest fitness. Under this scenario one would expect a non-linear relationship between infection intensity and fitness. METHODOLOGY/PRINCIPAL FINDINGS: Using data from blue tits (Cyanistes caeruleus) in southern Sweden, we investigated the relationship between the intensity of infection of its blood parasite (Haemoproteus majoris) and host survival to the following winter. Presence and intensity of parasite infections were determined by microscopy and confirmed using PCR of a 480 bp section of the cytochrome-b gene. While a linear model suggested no relationship between parasite intensity and survival (F = 0.01, p = 0.94), a non-linear model showed a significant negative quadratic effect (quadratic parasite intensity: F = 4.65, p = 0.032; linear parasite intensity: F = 4.47, p = 0.035). Visualization using the cubic spline technique showed maximum survival at intermediate parasite intensities. CONCLUSIONS/SIGNIFICANCE: Our results indicate that failing to recognize the potential for a non-linear relationship between parasite infection intensity and host fitness may lead to the potentially erroneous conclusion that the parasite is harmless to its host. Here we show that high parasite intensities indeed reduced survival, but this effect was masked by reduced survival for birds heavily suppressing their parasite intensities. Reduced survival among hosts with low
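The hump-shaped relationship described above is commonly tested by adding a quadratic term to a logistic (binomial) regression; the sketch below does this on synthetic data, and all variable names and numbers are invented rather than taken from the study.

    import numpy as np
    import statsmodels.api as sm

    # Synthetic data in which survival probability peaks at intermediate parasite intensity.
    rng = np.random.default_rng(4)
    intensity = rng.gamma(2.0, 2.0, 500)
    true_logit = -0.5 - 0.1 * (intensity - 4.0) ** 2
    survived = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

    # Logistic regression with linear and quadratic intensity terms; a significantly
    # negative quadratic coefficient indicates maximum survival at intermediate intensity.
    X = sm.add_constant(np.column_stack([intensity, intensity ** 2]))
    fit = sm.GLM(survived, X, family=sm.families.Binomial()).fit()
    print(fit.params)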
The maximum life expectancy for a micro-fabricated diaphragm
Cǎlimǎnescu, Ioan; Stan, Liviu-Constantin; Popa, Viorica
2015-02-01
Micro-fabricated diaphragms can be used to provide pumping action in microvalve and microfluidic applications. The functionality of the microdiaphragm in a wirelessly actuated micropump plays a major role in low-powered device actuation. In developing micropumps and their components, it is becoming an increasing trend to predict the performance before the prototype is fabricated, because performance prediction allows for an accurate estimation of yield and lifetime, develops a better understanding of the device while taking into account the details of the device structure and second-order effects, and hence avoids potential pitfalls in device operation in a practical environment. The goal of this research is to determine via FEA the life expectancy of a corrugated circular diaphragm made of an aluminum alloy. The geometry of the diaphragm was generated within SolidWorks 2010, and all the calculations were made using Ansys 13. The sound design of a micropump depends heavily on the lifetime expectancy of the working part of the device, which is the diaphragm. This part is subjected to cyclic loading, and fatigue will limit its life. Once the diaphragm breaks, the micropump is no longer able to fulfil its function. Any micropump manufacturer will therefore be very concerned with the fatigue life expectancy of the diaphragms. The circular, corrugated diaphragm made of Al alloy showed very good behavior from the fatigue point of view, the maximum life expectancy being 1.9 years of continuous operation at 100 cycles per second. This work shows a simple and straightforward application of FEA methods for estimating the fatigue behavior of corrugated circular microdiaphragms.
Characterizing bias correction uncertainty in wheat yield predictions
Ortiz, Andrea Monica; Jones, Julie; Freckleton, Robert; Scaife, Adam
2017-04-01
uncertainty that result from different climate model simulation input and bias correction methods. We simulate wheat yields using a General Linear Model that includes the effects of seasonal maximum temperatures and precipitation, since wheat is sensitive to heat stress during important developmental stages. We use the same statistical model to predict future wheat yields using the recently available bias-corrected simulations of EURO-CORDEX-Adjust. While statistical models are often criticized for their lack of complexity, an advantage is that we are here able to consider only the effect of the choice of climate model, resolution or bias correction method on yield. Initial results using both past and future bias-corrected climate simulations with a process-based model will also be presented. Through these methods, we make recommendations in preparing climate model output for crop models.
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on a same set of taxa, the maximum agreement subtree problem (MAST), respectively, maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees Of Life. We provide two linear time algorithms to check the isomorphism, respectively, compatibility, of a set of trees or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms, whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
Influence of Bark Pyrolysis Technology on Yield
ZHAO Yong; YAN Zhen; LIU Yurong; WANG Shu
2006-01-01
Using self-made miniature pyrolysis equipment, we experimented under different pyrolysis conditions to obtain different pyrolyzate yields (carbon, vinegar and gas). The results showed that as the temperature rises, the average yield of carbon descends gradually while the yields of vinegar and gas rise gradually. As the temperature rises, the yield of gas increases much more than that of vinegar. When the rate of temperature rise is increased, the yield of carbon goes down while the yields of vinegar and gas go up.
Comparison of Fission Product Yields and Their Impact
S. Harrison
2006-02-01
This memorandum describes the Naval Reactors Prime Contractor Team (NRPCT) Space Nuclear Power Program (SNPP) interest in determining the expected fission product yields from a Prometheus-type reactor and assessing the impact of these species on materials found in the fuel element and balance of plant. Theoretical yield calculations using ORIGEN-S and RACER computer models are included in graphical and tabular form in Attachment, with focus on the desired fast neutron spectrum data. The known fission product interaction concerns are the corrosive attack of iron- and nickel-based alloys by volatile fission products, such as cesium, tellurium, and iodine, and the radiological transmutation of krypton-85 in the coolant to rubidium-85, a potentially corrosive agent to the coolant system metal piping.
Microscale rheology of a soft glassy material close to yielding.
Jop, Pierre; Mansard, Vincent; Chaudhuri, Pinaki; Bocquet, Lydéric; Colin, Annie
2012-04-06
Using confocal microscopy, we study the flow of a model soft glassy material: a concentrated emulsion. We demonstrate the micro-macro link between the in situ measured movements of droplets during flow and the macroscopic rheological response of the concentrated emulsion, in the form of scaling relationships connecting the rheological "fluidity" with the local standard deviation of the strain-rate tensor. Furthermore, we measure correlations between these local fluctuations, thereby extracting a correlation length which increases while approaching the yielding transition, in accordance with recent theoretical predictions.
Thomas, Catherine [Paris-11 Univ., 91 Orsay (France)]
2000-01-19
Theoretical models have shown that the maximum magnetic field in radio frequency superconducting cavities is the superheating field H_sh. For niobium, H_sh is 25-30% higher than the thermodynamical field H_c: H_sh within (240-274) mT. However, the maximum magnetic field observed so far is in the range H_c,max = 152 mT for the best 1.3 GHz Nb cavities. This field is lower than the critical field H_c1 above which the superconductor breaks up into divided normal and superconducting zones (H_c1 ≤ H_c). Thermal instabilities are responsible for this low value. In order to reach H_sh before thermal breakdown, high-power short pulses are used. The cavity then needs to be strongly over-coupled. The dedicated test bed has been built through the collaboration between Istituto Nazionale di Fisica Nucleare (INFN) - Sezione di Genoa, and the Service d'Etudes et Realisation d'Accelerateurs (SERA) of Laboratoire de l'Accelerateur Lineaire (LAL). The maximum magnetic field, H_rf,max, measurements on INFN cavities give lower results than the theoretical predictions and are in agreement with previous results. The superheating magnetic field is linked to the magnetic penetration depth. This superconducting characteristic length can be used to determine the quality of niobium through the ratio between the resistivity measured at 300 K and at 4.2 K in the normal conducting state (RRR). Results have been compared to previous ones and agree well. They show that the RRR measured on cavities is superficial and lower than the RRR measured on samples, which concerns the volume. (author)
Yield and yield gaps in central U.S. corn production systems
The magnitude of yield gaps (YG) (potential yield – farmer yield) provides some indication of the prospects for increasing crop yield. Quantile regression analysis was applied to county maize (Zea mays L.) yields (1972 – 2011) from Kentucky, Iowa and Nebraska (irrigated) (total of 115 counties) to e...
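The quantile-regression idea referred to above can be sketched as follows: an upper conditional quantile of yield is taken as an estimate of attainable yield, and the gap to a central quantile estimates the yield gap. Everything in the sketch (data, trend, quantiles) is invented for illustration.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical county-year maize yields: an upward trend with scatter below it.
    rng = np.random.default_rng(5)
    year = rng.integers(1972, 2012, 800)
    yld = 5.0 + 0.1 * (year - 1972) - rng.gamma(2.0, 0.8, 800)
    df = pd.DataFrame({"year": year, "yld": yld})

    # 0.95 quantile line ~ attainable (potential) yield; 0.50 line ~ typical farmer yield.
    q95 = smf.quantreg("yld ~ year", df).fit(q=0.95)
    q50 = smf.quantreg("yld ~ year", df).fit(q=0.50)
    print(q95.params)
    print(q50.params)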
Theoretical behaviorism meets embodied cognition : Two theoretical analyses of behavior
Keijzer, F.A.
2005-01-01
This paper aims to do three things: First, to provide a review of John Staddon's book Adaptive dynamics: The theoretical analysis of behavior. Second, to compare Staddon's behaviorist view with current ideas on embodied cognition. Third, to use this comparison to explicate some outlines for a theore
CsI(Tl) infrared scintillation light yield and spectrum
Belogurov, S; Carugno, Giovanni; Conti, E; Iannuzzi, D; Meneguzzo, Anna Teresa
2000-01-01
Infrared emission from CsI(Tl) excited by approximately 70 keV electrons was detected with an InGaAs PIN photodiode. Some parameters of the infrared scintillation were studied. The emission spectrum lies between 1.55 and 1.70 µm with a maximum at 1.60 µm. The light yield of the infrared scintillation is (4.9 ± 0.3) × 10³ photons/MeV. Infrared scintillation caused by 3 MeV alpha particles was detected as well.
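For orientation (a back-of-envelope estimate using the quoted light yield, not a figure from the paper), a single 70 keV electron fully absorbed in the crystal would give on the order of a few hundred infrared photons:
\[
N_{\gamma} \approx 0.070\ \mathrm{MeV} \times 4.9 \times 10^{3}\ \mathrm{photons/MeV} \approx 3.4 \times 10^{2}\ \mathrm{photons}.
\]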
On the absolute value of the air-fluorescence yield
Rosado, J; Arqueros, F
2014-01-01
The absolute value of the air-fluorescence yield is a key parameter for the energy reconstruction of extensive air showers registered by fluorescence telescopes. In previous publications, we reported a detailed Monte Carlo simulation of the air-fluorescence generation that allowed the theoretical evaluation of this parameter. This simulation has been upgraded in the present work. As a result, we determined an updated absolute value of the fluorescence yield of 7.9 ± 2.0 ph/MeV for the band at 337 nm in dry air at 800 hPa and 293 K, in agreement with experimental values. We have also performed a critical analysis of available absolute measurements of the fluorescence yield with the assistance of our simulation. Corrections have been applied to some measurements to account for a bias in the evaluation of the energy deposition. Possible effects of other experimental aspects have also been discussed. From this analysis, we determined an average fluorescence yield of 7.04 ± 0.24 ph/MeV at the above conditions.
Phenomenology of muon-induced neutron yield
Malgin, A. S.
2017-07-01
The cosmogenic neutron yield Y_n characterizes the ability of matter to produce neutrons under the effect of cosmic-ray muons whose spectrum and average energy correspond to the observation depth. The yield is the basic characteristic of cosmogenic neutrons; the neutron production rate and neutron flux are both derived from it. The constancy of the exponents α and β in the known dependencies of the yield on muon energy, Y_n ∝ E_μ^α, and on atomic weight, Y_n ∝ A^β, allows one to combine these dependencies into a single formula and to connect the yield with muon energy loss in matter. As a result, phenomenological formulas for the yields of muon-induced charged pions and neutrons can be obtained. Both expressions are associated with the nuclear energy loss of ultrarelativistic muons, which provides the main contribution to the total neutron yield. The total yield can be described by a universal formula that best fits the experimental data.
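The abstract does not write out the combined expression; one hedged reading of "combine these dependencies in a single formula" is a simple product of the two power laws, with c, α, and β as empirical constants:
\[
Y_n \propto E_{\mu}^{\alpha}, \qquad Y_n \propto A^{\beta}
\quad\Longrightarrow\quad
Y_n \approx c\, E_{\mu}^{\alpha} A^{\beta}.
\]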
Climate change: implications for the yield of edible rice.
Zhao, Xiangqian; Fitzgerald, Melissa
2013-01-01
Global warming affects not only rice yield but also grain quality. A better understanding of the effects of climate factors on rice quality provides information for new breeding strategies to develop varieties of rice adapted to a changing world. Chalkiness is a key trait of physical quality, and along with head rice yield, is used to determine the price of rice in all markets. In the present study, we show that for every ∼1% decrease in chalkiness, an increase of ∼1% in head rice yield follows, illustrating the dual impact of chalk on amount of marketable rice and its value. Previous studies in controlled growing conditions report that chalkiness is associated with high temperature. From 1980-2009 at IRRI, Los Baños, the Philippines, annual minimum and mean temperatures, and diurnal variation changed significantly. The objective of this study was to determine how climate impacts chalkiness in field conditions over four wet and dry seasons. We show that low relative humidity and a high vapour pressure deficit in the dry season associate with low chalk and high head rice yield in spite of higher maximum temperature, but in the opposite conditions of the wet season, chalk is high and head rice yield is low. The data therefore suggest that transpirational cooling is a key factor affecting chalkiness and head rice yield, and global warming per se might not be the major factor that decreases the amount and quality of rice, but other climate factors in combination, that enable the crop to maintain a cool canopy.
Mechanics lectures on theoretical physics
Sommerfeld, Arnold Johannes Wilhelm
1952-01-01
Mechanics: Lectures on Theoretical Physics, Volume I covers a general course on theoretical physics. The book discusses the mechanics of a particle; the mechanics of systems; the principle of virtual work; and d'alembert's principle. The text also describes oscillation problems; the kinematics, statics, and dynamics of a rigid body; the theory of relative motion; and the integral variational principles of mechanics. Lagrange's equations for generalized coordinates and the theory of Hamilton are also considered. Physicists, mathematicians, and students taking Physics courses will find the book