NBI and NBI Combined with Magnifying Colonoscopy
Mineo Iwatate
2012-01-01
Although magnifying chromoendoscopy has been a reliable diagnostic tool, narrow-band imaging (NBI), developed in Japan since 1999, has now largely taken over the role of chromoendoscopy because of its convenience and simplicity. In this paper, we principally describe the efficacy of magnifying chromoendoscopy and magnifying colonoscopy with NBI for detection, histological prediction, and estimation of the depth of early colorectal cancer, as well as future prospects. Although some meta-analyses have concluded that NBI is not superior to white-light imaging for the detection of adenomatous polyps in screening colonoscopy, NBI with magnifying colonoscopy is useful for histological prediction and for estimating the depth of invasion. To standardize these diagnostic strategies, we focus on the NBI International Colorectal Endoscopic (NICE) classification, proposed for use by endoscopists with or without a magnifying endoscope. However, more prospective research is needed to prove that this classification can be applied with satisfactory availability, feasibility, and reliability. In the future, NBI might contribute to real-time histological prediction during colonoscopy, which has substantial benefits both for reducing the risks of polypectomy and for saving the cost of histological evaluation by resecting and discarding diminutive adenomatous polyps (the resect-and-discard strategy).
National Bridge Inventory (NBI) Bridges
Department of Homeland Security — The NBI is a collection of information (database) describing more than 600,000 of the Nation's bridges located on public roads, including Interstate Highways,...
A New VMAT-2 Inhibitor NBI-641449 in the Treatment of Huntington Disease.
Chen, Sheng; Zhang, Xiao-Jie; Xie, Wen-Jie; Qiu, Hong-Yan; Liu, Hui; Le, Wei-Dong
2015-08-01
To evaluate the effectiveness of a new VMAT-2 inhibitor NBI-641449 in controlling hyperkinetic movements of Huntington disease (HD) and to investigate its possible therapeutic effects. We applied three different doses of NBI-641449 (1, 10, 100 mg/kg/day) for 2 weeks in 4-month-old YAC128 mice and wild-type (WT) mice. Rotarod performance and locomotive activities were tested during the administration of the drug. The concentration of dopamine (DA) and its metabolites was quantified in the striatal tissues by high-performance liquid chromatography (HPLC). Neuron survival in striatum and huntingtin protein aggregates were assessed with immunostaining. Expression levels of endoplasmic reticulum (ER) stress proteins were detected by immunoblotting. Rotarod performance was significantly improved after treatment with low or middle dose of NBI-641449 in YAC128 mice. Open field test showed that NBI-641449 treatment could attenuate the increased horizontal activity (HACTV), total vertical movement, moving time, and moving distance in YAC128 mice. High dose of NBI-641449 might cause sedative effects in WT and YAC128 mice. HPLC showed that NBI-641449 caused a dose-dependent decrease of DA, 3,4-dihydroxyphenylacetic acid, and homovanillic acid levels in the striatum. NeuN and DARPP-32 immunostaining revealed that NBI-641449 had no significant effect on the neuron survival in the striatum. However, NBI-641449 treatment reduced the huntingtin protein aggregates in the cortex of YAC128 mice. In addition, the levels of ER stress proteins were increased in YAC128 mice, which can be suppressed by NBI-641449. These findings suggest that this new VMAT-2 inhibitor NBI-641449 may have therapeutic potential for the treatment of HD. © 2015 John Wiley & Sons Ltd.
Conceptual design of NBI beamline for VEST plasma heating
Kim, T.S., E-mail: tskim@kaeri.re.kr; In, S.R.; Jeong, S.H.; Park, M.; Chang, D.H.; Jung, B.K.; Lee, K.W.
2016-11-01
Highlights: • The VEST NBI injector is conceptually designed to support further VEST plasma experiments. • The injector is composed of two sets of 20 keV/25 A magnetic-cusp bucket ion sources, neutralizer ducts, electrostatic ion dumps, an NB vessel with a cryopump, and a rotating calorimeter. • The vacuum vessel of the beamline is divided into two parts for high injection efficiency and for the two directions (co- and counter-current) of neutral beam injection. • An ion source for the VEST NBI system was also designed to deliver neutral hydrogen beams with a power of 0.3 MW. The plasma generator of the VEST NB ion source has a modified TFTR bucket multi-cusp chamber, with twelve hairpin-shaped tungsten filaments serving as the cathode and an arc chamber, including a bucket and an electron dump, serving as the anode. The accelerator system consists of three grids, each with an extraction area of 100 mm × 320 mm and 64 shaped slits at 3 mm spacing. • The preliminary structural design and the layout of the main components of the injector have been completed. Simulations and calculations for optimization of the NB beamline design show that the ion source parameters, the neutralization efficiency (76%; equilibrium value 95%), and the beam power transmission efficiency (higher than 90%) are in agreement with the design targets of the VEST NB beamline. • The VEST NBI system will provide a neutral beam of ∼0.6 MW for both heating and current drive in the torus plasma. - Abstract: A 10 ms-pulsed NBI (Neutral Beam Injection) system for VEST (Versatile Experiment Spherical Torus) plasma heating is designed to provide a beam power of more than 0.6 MW with 20 keV H° neutrals. The VEST NBI injector is composed of two sets of 20 keV/25 A magnetic-cusp bucket ion sources, neutralizer ducts, a residual ion dump, an NB vessel with a cryopump, and a rotating calorimeter. The position and size of these beamline components are roughly determined with geometric
Maximum Photovoltaic Penetration Levels on Typical Distribution Feeders: Preprint
Hoke, A.; Butler, R.; Hambrick, J.; Kroposki, B.
2012-07-01
This paper presents simulation results for a taxonomy of typical distribution feeders with various levels of photovoltaic (PV) penetration. For each of the 16 feeders simulated, the maximum PV penetration that did not result in steady-state voltage or current violation is presented for several PV location scenarios: clustered near the feeder source, clustered near the midpoint of the feeder, clustered near the end of the feeder, randomly located, and evenly distributed. In addition, the maximum level of PV is presented for single, large PV systems at each location. Maximum PV penetration was determined by requiring that feeder voltages stay within ANSI Range A and that feeder currents stay within the ranges determined by overcurrent protection devices. Simulations were run in GridLAB-D using hourly time steps over a year with randomized load profiles based on utility data and typical meteorological year weather data. For 86% of the cases simulated, maximum PV penetration was at least 30% of peak load.
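The screening logic, sweeping penetration upward until a feeder limit is violated, can be illustrated with a deliberately simplified two-bus model. This is not GridLAB-D; the source voltage, impedances, and power factor below are invented for the sketch:

```python
# Simplified two-bus feeder sketch: raise PV penetration until the
# end-of-feeder voltage leaves ANSI C84.1 Range A (taken as 0.95-1.05 p.u.).
# All numerical values are invented for illustration.

def end_of_feeder_voltage(pv_fraction, v_source=1.02, load_pu=1.0,
                          r_pu=0.06, x_pu=0.02, pf=0.95):
    """Linearized end-of-feeder voltage (p.u.) with PV offsetting the load."""
    p_net = load_pu * pf - pv_fraction * load_pu   # PV assumed at unity pf
    q_net = load_pu * (1.0 - pf ** 2) ** 0.5
    return v_source - (r_pu * p_net + x_pu * q_net)  # drop ~ R*P + X*Q

def max_penetration(step=0.01, v_min=0.95, v_max=1.05):
    """Largest PV fraction of peak load with no steady-state voltage violation."""
    pv = 0.0
    while v_min <= end_of_feeder_voltage(pv + step) <= v_max:
        pv += step
    return pv

print(f"max PV penetration ~ {max_penetration():.0%} of peak load")
```

In this toy model the binding limit is overvoltage from reverse power flow; on the simulated feeders the binding constraint could equally be the overcurrent-protection limits the paper also checked, and PV location strongly shifts the answer.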
21 CFR 801.415 - Maximum acceptable level of ozone.
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Maximum acceptable level of ozone. 801.415 Section... level of ozone. (a) Ozone is a toxic gas with no known useful medical application in specific, adjunctive, or preventive therapy. In order for ozone to be effective as a germicide, it must be present in...
Design of Timing System Software on EAST-NBI
Zhao, Yuan-Zhe; Hu, Chun-Dong; Sheng, Peng; Zhang, Xiao-Dan; Wu, De-Yun; Cui, Qing-Long
2013-10-01
The Neutral Beam Injector (NBI) is one of the main plasma heating and current-drive methods for the Experimental Advanced Superconducting Tokamak (EAST). A control system was designed to monitor the NBI experiment, control all of the power supplies, and handle data acquisition and networking. As an important part of the NBI control system, the timing system (TS) provides a unified clock for all NBI subsystems. The TS controls the input/output of digital and analog signals and sends feedback messages to the control server, providing the alarm and interlock-protection functions. The TS software runs on Windows and is written in LabVIEW, using a client/server architecture, multithreading, and cyclic redundancy checks. The experimental results have proved that the TS provides a stable and reliable clock to the NBI subsystems and contributes to the safety of the whole NBI system.
Forecasting ozone daily maximum levels at Santiago, Chile
Jorquera, Héctor; Pérez, Ricardo; Cipriano, Aldo; Espejo, Andrés; Victoria Letelier, M.; Acuña, Gonzalo
In major urban areas, the health impact of air pollution is serious enough that it is included among the meteorological variables forecast daily. This work focuses on comparing different forecasting systems for daily maximum ozone levels at Santiago, Chile. The modelling tools used for these systems were linear time series, artificial neural networks, and fuzzy models. The structure of the forecasting model was derived from basic principles, and it includes a combination of persistence and daily maximum air temperature as input variables. Assessment of the models is based on two indices: their ability to correctly forecast an episode, and their tendency to forecast an episode that did not in fact occur (a false positive). All the models tried in this work showed good forecasting performance, with 70-95% successful forecasts at two monitoring sites: Downtown (moderate impacts) and Eastern (downwind, highest impacts). The number of false positives was not negligible, but this may be improved by expressing the forecast in broad classes: low, average, high, and very high impacts; the fuzzy model was the most reliable forecast, with the lowest number of false positives among the models evaluated. The quality of the results and the dynamics of ozone formation suggest using the forecast to warn people about excessive exposure during episode days at Santiago.
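The forecasting structure described above, tomorrow's ozone maximum explained by persistence plus the daily maximum temperature, can be sketched as a least-squares fit. The data below are synthetic stand-ins (generating coefficients 0.5 and 3.0, noise level, and temperature range are invented), not Santiago monitor records:

```python
# Synthetic stand-in for the linear forecasting structure: tomorrow's daily
# ozone maximum regressed on persistence (today's maximum) and tomorrow's
# maximum temperature. All generating values are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200
temp = 20.0 + 10.0 * rng.random(n)           # daily max temperature, deg C
ozone = np.empty(n)
ozone[0] = 150.0
for t in range(1, n):                        # synthetic generating process
    ozone[t] = 0.5 * ozone[t - 1] + 3.0 * temp[t] + rng.normal(0.0, 5.0)

# Design matrix: [persistence, temperature, intercept]
X = np.column_stack([ozone[:-1], temp[1:], np.ones(n - 1)])
y = ozone[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

rms = float(np.sqrt(np.mean((X @ coef - y) ** 2)))
print("fitted [persistence, temperature, intercept]:", coef.round(2))
print("in-sample RMS error:", round(rms, 1))
```

The neural-network and fuzzy models in the paper replace this linear map with nonlinear ones, but keep the same persistence-plus-temperature input structure.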
Sphaleron glueballs in NBI theory with symmetrized trace
Dyadichev, V V
2000-01-01
We derive a closed expression for the SU(2) Born-Infeld action with the symmetrized trace for static, spherically symmetric, purely magnetic configurations. The Lagrangian is obtained in terms of elementary functions. Using it, we investigate glueball solutions to the flat-space NBI theory and their self-gravitating counterparts. Such solutions, found previously in the NBI model with the 'square root - ordinary trace' Lagrangian, are shown to persist in the theory with the symmetrized-trace Lagrangian as well. Although the symmetrized-trace NBI equations differ substantially from those of the ordinary-trace theory, the qualitative picture of glueballs remains essentially the same. Gravity further reduces the difference between the solutions of the two models, and, for sufficiently large values of the effective gravitational coupling, the solutions tend to the same limiting form. Black holes in the NBI theory with the symmetrized trace are also discussed.
Design of ITER NBI power supply system
Watanabe, Kazuhiro; Ohara, Yoshihiro; Okumura, Yoshikazu [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment; Higa, Osamu; Kawashima, Syuichi; Ono, Youichi; Tanaka, Masanobu; Yasutomi, Sei
1997-07-01
A power supply system has been designed for the ITER neutral beam injector (NBI), which injects a total of 50 MW of 1 MeV beams from three modules. The power supply system consists of a source power supply for negative ion production/extraction and a DC 1 MV, 45 A power supply for negative ion acceleration. An inverter-controlled multi-transformer/rectifier system has been adopted for the acceleration power supply. An inverter frequency of 150 Hz was selected to satisfy the required specifications: a rise time of <100 ms, a voltage ripple of <10% peak to peak, and a cut-off speed of <200 μs. Computation confirmed that the rise time, ripple, and cut-off speed are about 50 ms, 7%, and <200 μs, respectively. It was also confirmed that the surge current and the energy input to the ion source at breakdown can be kept below 3 kA and 10 J, respectively, which are considered lower than the allowable values. A 1 MV transmission line has been designed from the viewpoint of the electric field on the inner conductors and the grounded conductor. The results of the design study indicate that all the specifications required of the power supply system can be satisfied and that R and D on the transmission line is one of the most important subjects. (author)
40 CFR 141.11 - Maximum contaminant levels for inorganic chemicals.
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant levels for inorganic chemicals. 141.11 Section 141.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Maximum Contaminant Levels § 141.11 Maximum contaminant levels...
40 CFR 141.62 - Maximum contaminant levels for inorganic contaminants.
2010-07-01
... (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS National Primary Drinking Water Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.62 Maximum..., maintenance, and monitoring must be provided by the water system to ensure adequate performance. 5 Unlikely to...
Investigation of HHFW and NBI Combined Heating in NSTX
B.P. LeBlanc; R.E. Bell; S. Bernabei; T.M. Biewer; J.C. Hosea; J.R. Wilson
2005-04-27
A series of experiments was conducted to investigate the combined use of high-harmonic fast-wave (HHFW) and neutral-beam injection (NBI) auxiliary heating in National Spherical Torus Experiment (NSTX) plasmas. A modest increase in the total stored energy, coincident with a near doubling of the neutron production rate, is observed when NBI heating is added to HHFW in L-mode plasmas. An increase in the core electron temperature is also observed. On the other hand, essentially no stored-energy augmentation or neutron-production-rate enhancement is observed when applying HHFW during the ''H'' phase of NBI-driven H-mode plasmas. Spectroscopic measurements of the edge carbon line radiation indicate an unexpected ion temperature increase, suggesting that edge effects are reducing the amount of HHFW power reaching the plasma core.
40 CFR 142.61 - Variances from the maximum contaminant level for fluoride.
2010-07-01
... level for fluoride. 142.61 Section 142.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... from the maximum contaminant level for fluoride. (a) The Administrator, pursuant to section 1415(a)(1... means generally available for achieving compliance with the Maximum Contaminant Level for fluoride. (1...
The analysis and kinetic energy balance of an upper-level wind maximum during intense convection
Fuelberg, H. E.; Jedlovec, G. J.
1982-01-01
The purpose of this paper is to analyze the formation and maintenance of the upper-level wind maximum which formed between 1800 and 2100 GMT, April 10, 1979, during the AVE-SESAME I period, when intense storms and tornadoes were experienced (the Red River Valley tornado outbreak). Radiosonde stations participating in AVE-SESAME I are plotted (centered on Oklahoma). National Meteorological Center radar summaries near the times of maximum convective activity are mapped, and height and isotach plots are given, where the formation of an upper-level wind maximum over Oklahoma is the most significant feature at 300 mb. The energy balance of the storm region is seen to change dramatically as the wind maximum forms. During much of its lifetime, the upper-level wind maximum is maintained by ageostrophic flow that produces cross-contour generation of kinetic energy and by the upward transport of midtropospheric energy. Two possible mechanisms for the ageostrophic flow are considered.
Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates
Lee, Sik-Yum; Song, Xin-Yuan
2005-01-01
In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…
Scoping studies for NBI launch geometries on DEMO
Jenkins, I., E-mail: ian.jenkins@ukaea.uk; Challis, C.D.; Keeling, D.L.; Surrey, E.
2016-05-15
Highlights: • NBCD scans are done for beam energies of 1.5 MeV and 1.0 MeV in two DEMO scenarios. • NBCD scan profiles are fed into a genetic algorithm to fit a target current profile. • The result gives the locations and powers of the sources giving the best fit to the target profile. • This method can help provide requirements for the DEMO beamline geometry. - Abstract: Engineering and technical constraints on Neutral Beam Injection (NBI) in DEMO may determine the available beam energy and may also strongly impact the Neutral Beam Current Drive (NBCD) efficiency by restricting the available beam tangential radii. The latter are determined by factors such as the inter-TF-coil spacing, as well as the degree of shielding required. To illustrate how these factors may affect the contribution of NBCD to DEMO operating scenarios, scans of NBI tangency radii and elevation in two possible DEMO scenarios have been performed at two beam energies, 1.5 MeV and 1.0 MeV, in order to determine the most favourable options for NBCD efficiency. In addition, a method using a genetic algorithm has been used to seek optimised solutions for NBI source locations and powers that synthesize a target total plasma driven-current profile. It is found that certain beam trajectories may be proscribed by limitations on shinethrough onto the vessel wall. This may affect the ability of NBCD to extend the duration of a pulse in a scenario where it must complement the induced plasma current. Operating at the lower beam energy reduces the restrictions due to shinethrough and is attractive for technical reasons, as it will require less development, but in the scenarios examined here it results in a spatial broadening of the NBCD profile, which may make it more challenging to achieve the desired total driven-current profiles.
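As a rough illustration of the genetic-algorithm step (not the actual DEMO beam modelling), each source's driven-current density can be represented by an assumed Gaussian profile in normalized radius, with a small GA searching source positions and powers so the summed profile matches a target. The profile widths, the target shape, and all GA settings below are invented:

```python
# Toy profile-fitting sketch: two NBI "sources", each with an assumed
# Gaussian driven-current-density profile in normalized radius rho, and a
# small elitist genetic algorithm searching their centres and powers so the
# summed profile approximates a target. All shapes and numbers are invented.
import numpy as np

rng = np.random.default_rng(7)
rho = np.linspace(0.0, 1.0, 50)
target = np.exp(-((rho - 0.3) ** 2) / 0.02)         # target j(rho), arb. units

def profile(center, power):
    return power * np.exp(-((rho - center) ** 2) / 0.01)

def fitness(genome):                                 # genome = [c1, p1, c2, p2]
    total = profile(genome[0], genome[1]) + profile(genome[2], genome[3])
    return -float(np.sum((total - target) ** 2))     # negative squared error

pop = rng.random((40, 4))                            # centres/powers in [0, 1]
for _ in range(200):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-20:]]          # elitism: keep best half
    children = parents[rng.integers(0, 20, 20)] + rng.normal(0.0, 0.02, (20, 4))
    pop = np.vstack([parents, np.clip(children, 0.0, 1.0)])

best = pop[int(np.argmax([fitness(g) for g in pop]))]
print("best (centre, power) pairs:", best.round(2))
```

In the real study the per-source profiles come from beam deposition calculations at the scanned tangency radii and elevations, and constraints such as shinethrough limits exclude parts of the search space.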
40 CFR 141.53 - Maximum contaminant level goals for disinfection byproducts.
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant level goals for disinfection byproducts. 141.53 Section 141.53 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... disinfection byproducts. MCLGs for the following disinfection byproducts are as indicated: Disinfection...
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
Narrow band interference cancelation in OFDM: A structured maximum likelihood approach
Sohail, Muhammad Sadiq
2012-06-01
This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.
Active contours for localizing polyps in colonoscopic NBI image data
Breier, Matthias; Gross, Sebastian; Behrens, Alexander; Stehle, Thomas; Aach, Til
2011-03-01
Colon cancer is the third most common type of cancer in the United States of America. Every year about 140,000 people are newly diagnosed with colon cancer. Early detection is crucial for a successful therapy. The standard screening procedure is called colonoscopy. Using this endoscopic examination, physicians can find colon polyps and remove them if necessary. Adenomatous colon polyps are deemed a preliminary stage of colon cancer. The removal of a polyp, though, can lead to complications like severe bleeding or colon perforation. Thus, only polyps diagnosed as adenomatous should be removed. To decide whether a polyp is adenomatous, the polyp's surface structure, including its vascular patterns, has to be inspected. Narrow-band imaging (NBI) is a new tool to improve the visibility of the vascular patterns of polyps. The first step for an automatic polyp classification system is the localization of the polyp. We investigate active contours for the localization of colon polyps in NBI image data. The shape of polyps, though roughly approximated by an elliptic form, is highly variable. Active contours offer the flexibility to adapt well to this variation. To avoid clustering of contour polygon points, we propose the application of active rays. The quality of the results was evaluated against manually segmented polyps as ground-truth data, and compared to a template matching approach and to the Generalized Hough Transform. Active contours are superior to the Hough transform and perform as well as the template matching approach.
Scoping Studies for NBI Launch Geometries on DEMO
Jenkins, I; Keeling, D L; Surrey, E
2014-01-01
Scans of Neutral Beam Injection (NBI) tangency radii and elevation on two possible DEMO scenarios have been performed with two beam energies, 1.5MeV and 1.0MeV, in order to determine the most favourable options for Neutral Beam Current Drive (NBCD) efficiency. In addition, a method using a genetic algorithm has been used to seek optimised solutions of NBI source locations and powers to synthesize a target total plasma driven-current profile. It is found that certain beam trajectories may be proscribed by limitations on shinethrough onto the vessel wall. This may affect the ability of NBCD to extend the duration of a pulse in a scenario where it must complement the induced plasma current. Operating at the lower beam energy reduces the restrictions due to shinethrough and is attractive for technical reasons, but in the scenarios examined here this results in a spatial broadening of the NBCD profile, which may make it more challenging to achieve desired total driven-current profiles.
25(OH)D3 Levels Relative to Muscle Strength and Maximum Oxygen Uptake in Athletes
Książek Anna
2016-04-01
Vitamin D is mainly known for its effects on bone and calcium metabolism. The discovery of vitamin D receptors in many extraskeletal cells suggests that it may also play a significant role in other organs and systems. The aim of our study was to assess the relationship between 25(OH)D3 levels, lower-limb isokinetic strength, and maximum oxygen uptake in well-trained professional football players. We enrolled 43 Polish premier league soccer players; the mean age was 22.7±5.3 years. Our study showed decreased serum 25(OH)D3 levels in 74.4% of the professional players. The results also demonstrated a lack of statistically significant correlation between 25(OH)D3 levels and lower-limb muscle strength, with the exception of the peak torque of the left knee extensors at an angular velocity of 150°/s (r=0.41). No significant correlations were found with hand-grip strength or maximum oxygen uptake. Based on our study, we conclude that in well-trained professional soccer players there was no correlation between serum 25(OH)D3 levels and muscle strength or maximum oxygen uptake.
Yudong Zhang
2011-04-01
This paper proposes a global multi-level thresholding method for image segmentation. As a criterion, the traditional method uses the Shannon entropy, originating from information theory and treating the gray-level image histogram as a probability distribution, whereas we apply the Tsallis entropy as a generalized information-theoretic entropy formalism. For the algorithm, we use the artificial bee colony approach, since an exhaustive search would be too time-consuming. The experiments demonstrate that: (1) the Tsallis entropy is superior to traditional maximum-entropy thresholding, maximum between-class variance thresholding, and minimum cross-entropy thresholding; and (2) the artificial bee colony is faster than either a genetic algorithm or particle swarm optimization. Therefore, our approach is effective and rapid.
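A minimal sketch of the Tsallis criterion for a single threshold: the histogram is split at t into two renormalized class distributions, and the threshold maximizing their pseudo-additive combination is chosen. The entropic index q and the bimodal test histogram are invented for illustration; the paper replaces this exhaustive scan with an artificial bee colony and handles several thresholds at once.

```python
# Single-threshold Tsallis thresholding sketch: maximize the pseudo-additive
# combination S_A + S_B + (1 - q)*S_A*S_B of the two class entropies.
# q = 0.8 and the synthetic histogram are invented values.
import numpy as np

def tsallis_entropy(p, q=0.8):
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def best_threshold(hist, q=0.8):
    p = hist / hist.sum()
    scores = []
    for t in range(1, 255):
        a, b = p[: t + 1], p[t + 1 :]
        if a.sum() == 0.0 or b.sum() == 0.0:
            scores.append(-np.inf)
            continue
        sa = tsallis_entropy(a / a.sum(), q)
        sb = tsallis_entropy(b / b.sum(), q)
        scores.append(sa + sb + (1.0 - q) * sa * sb)  # pseudo-additivity
    return 1 + int(np.argmax(scores))

# Bimodal synthetic histogram: dark peak near 60, bright peak near 180
levels = np.arange(256)
hist = np.exp(-((levels - 60) ** 2) / 200.0) + np.exp(-((levels - 180) ** 2) / 200.0)
print("chosen threshold:", best_threshold(hist))
```

For multi-level thresholding the scan becomes a search over several thresholds jointly, which is where a population-based optimizer such as the bee colony pays off.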
Selective preparation of the maximum coherent superposition state in four-level atoms
Li Deng; Yueping Niu; Shangqing Gong
2011-01-01
We demonstrate that the maximum coherent superposition state can be selectively prepared using a sequence of pulse pairs in lambda-type atomic systems with a doublet final level. In each pair, the Stokes pulse precedes the pump pulse, with their back edges overlapping. Numerical results indicate that, by tuning the interval between adjacent pulse pairs, selective preparation of the maximum coherent superposition state between the initial level and one of the final levels can be achieved. The phenomenon is caused by the accumulative property of the pulse sequence.
Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan
2013-01-01
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
The role of narrow-band imaging (NBI) endoscopy in optical biopsy of vocal cord leukoplakia.
Staníková, L; Šatanková, J; Kučová, H; Walderová, R; Zeleník, K; Komínek, Pavel
2017-01-01
The aim of this study was to investigate whether observing microvascular changes with narrow-band imaging (NBI) endoscopy in the area surrounding leukoplakia is sufficient for discriminating between benign and malignant patterns of vocal cord leukoplakia. A total of 282 patients were investigated using white-light high-definition TV laryngoscopy and NBI endoscopy from June 2013 to August 2015, and 63 patients with a primary case of laryngeal leukoplakia were enrolled. Patients were divided into two groups based on NBI endoscopy: leukoplakia with surrounding malignant intraepithelial papillary capillary loops (group I; 26/63) and leukoplakia with a surrounding benign vascular network (group II; 37/63). All 63 patients were evaluated by blinded histological examination, and the results were compared with the NBI optical biopsy. Carcinoma in situ or invasive squamous cell carcinoma was confirmed in 22/26 cases (84.6%) in group I. Hyperkeratosis or low-grade dysplasia was confirmed histologically in 31/37 (83.8%) and squamous cell carcinoma in 2/37 (5.4%) cases in group II. The accordance between NBI endoscopy and the histopathological features of vocal cord leukoplakia lesions was statistically significant (kappa index 0.77), supporting classification of leukoplakias based on optical prehistological diagnosis. The close accordance between NBI features and histological results suggests that a negative NBI endoscopy may be an indication for long-term endoscopic follow-up without histological evaluation.
Maria Pilar Martínez Ruiz
2010-12-01
Starting from the store attributes that the marketing literature has identified as key to grocery retailers' differentiation strategies, this work identifies the main factors underlying those attributes. The goal is to analyze which of these factors exert the greatest influence on the highest level of customer satisfaction. To this end, we examined a sample of 422 consumers who made purchases in different store formats in Spain, considering the influence of feature advertising on customer behavior. The work yields interesting conclusions about the aspects with the greatest impact on the maximum level of customer satisfaction, depending on the influence of feature advertising.
Measurement of HL-2A NBI Beam Profile and Beam Power
LIU He; CAO Jianyong; JIANG Shaofeng; LUO Cuiwen; TANG Lixin; LEI Guangjiu; RAO Jun; LI Bo
2009-01-01
To optimize the operating parameters of the HL-2A NBI beamline, features of the beamline, including the beam profile and the power deposited on components and injected into the tokamak plasma, were measured. The operating parameters of the four sources on the beamline were optimized by monitoring the beam profile and beam power, thereby increasing the transmission efficiency of the injected NBI power. A beam diagnostic system for the HL-2A NBI beamline, together with the diagnostic results, is also presented.
DONG Sheng; CHI Kun; ZHANG Qiyi; ZHANG Xiangdong
2012-01-01
Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in an estuary area. The GMM combines grey system theory and Markov theory into a higher-precision model: the grey system predicts the trend values, and the Markov chain forecasts the fluctuation values, so the forecast incorporates both kinds of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM(1,1) model based on the data series; 2) estimate the trend values; 3) establish a Markov model based on the relative-error series; 4) modify the relative errors from step 2 to obtain second-order estimates; 5) compare the results with measured data and estimate the accuracy. The historical water level records (1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China are used to calibrate and verify the proposed model according to the above steps. Each 25 years of data is treated as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to 10 other hydrological stations in the same estuary, with good or acceptable forecasts at all of them. The feasibility and effectiveness of this new forecasting model are thus demonstrated.
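Steps 1 and 2 above, fitting the GM(1,1) grey model and estimating the trend values, can be sketched as follows. The annual-maximum series is hypothetical, and the Markov correction of the relative errors (steps 3-5) is not shown:

```python
# GM(1,1) sketch: accumulate the series, fit the grey differential equation
# by least squares, then difference the whitened solution back to obtain
# trend estimates and forecasts. The input series is invented.
import numpy as np

def gm11(x0, horizon=1):
    """Fit GM(1,1) to series x0; return fitted trend values plus forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])             # adjacent means of x1
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # x0(k) = -a*z1(k) + b
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # whitened-equation solution
    return np.concatenate([[x0[0]], np.diff(x1_hat)])  # back to original series

series = [2.87, 3.02, 3.15, 3.31, 3.40, 3.56]   # hypothetical annual maxima (m)
fitted = gm11(series, horizon=2)
print("trend estimates:", fitted.round(2))       # last two values are forecasts
```

The Markov step in the paper then classifies the relative errors of these trend estimates into states and uses the state-transition probabilities to correct the forecast.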
Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S
2015-01-01
Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002; Graham, 2009; Enders, 2010). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program.
Wenchao Cui
2013-01-01
Full Text Available This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes’ rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
Disproportionate Allocation of Indirect Costs at Individual-Farm Level Using Maximum Entropy
Markus Lips
2017-08-01
Full Text Available This paper addresses the allocation of indirect or joint costs among farm enterprises, and elaborates two maximum entropy models, the basic CoreModel and the InequalityModel, which additionally includes inequality restrictions in order to incorporate knowledge from production technology. Representing the indirect costing approach, both models address the individual-farm level and use standard costs from the farm-management literature as allocation bases. They provide a disproportionate allocation, with the distinctive feature that enterprises with large allocation bases face stronger adjustments than enterprises with small ones, bringing indirect costing closer to reality. Based on crop-farm observations from the Swiss Farm Accountancy Data Network (FADN), including up to 36 observations per enterprise, both models are compared with a proportional allocation as the reference base. The mean differences of the enterprises' allocated labour inputs and machinery costs are in a range of up to ±35% and ±20% for the CoreModel and InequalityModel, respectively. We conclude that the choice of allocation method has a strong influence on the resulting indirect costs. Furthermore, the application of inequality restrictions is a precondition for making the merits of the maximum entropy principle accessible for the allocation of indirect costs.
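The general idea, though not the paper's exact model, can be sketched as a cross-entropy allocation: shares stay as close as possible to the proportional shares implied by the allocation bases while satisfying inequality restrictions from production technology. All numbers and the 45% cap below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical allocation bases (standard labour costs) for three enterprises
base = np.array([50.0, 30.0, 20.0])
q = base / base.sum()              # proportional shares (the reference base)
total_indirect = 1000.0            # indirect costs to allocate

def cross_entropy(p):              # minimized at p = q when unconstrained
    return float(np.sum(p * np.log(p / q)))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    # hypothetical inequality restriction from production technology:
    # enterprise 0 may receive at most 45% of the indirect costs
    {"type": "ineq", "fun": lambda p: 0.45 - p[0]},
]
res = minimize(cross_entropy, x0=q, bounds=[(1e-9, 1.0)] * 3,
               constraints=constraints)
shares = res.x
allocated = shares * total_indirect   # disproportionate allocation
```

With the cap binding, the excess share of enterprise 0 is redistributed to the others in proportion to their reference shares, which is the disproportionate behavior described above.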
Jafarizadeh, M A; Sabric, H; Malekic, B Rashidian
2011-01-01
In this paper, a systematic study of the quantum phase transition within the U(5) ↔ SO(6) limits is presented in terms of an infinite-dimensional algebraic technique in the IBM framework. Energy level statistics are investigated with the Maximum Likelihood Estimation (MLE) method in order to characterize the transitional region. Eigenvalues of these systems are obtained by solving the Bethe-Ansatz equations, with least-squares fitting to experimental data to obtain the constants of the Hamiltonian. Our results verify the dependence of the Nearest Neighbor Spacing Distribution (NNSD) parameter on the control parameter (c_s) and also display the chaotic behavior of transitional regions compared with both limits. In order to compare our results for the two limits with both the GUE and GOE ensembles, we have suggested a new NNSD distribution and have obtained better KLD distances for the new distribution compared with others in both limits. Also, in the case of N→∞, the total boson number dependence displays the univ...
Design of the RF ion source for the ITER NBI
Marcuzzi, D. [Consorzio RFX, Euratom-ENEA Association, Corso Stati Uniti 4, I-35127 Padova (Italy)], E-mail: diego.marcuzzi@igi.cnr.it; Agostinetti, P.; Dalla Palma, M. [Consorzio RFX, Euratom-ENEA Association, Corso Stati Uniti 4, I-35127 Padova (Italy); Falter, H.D.; Heinemann, B.; Riedl, R. [Max-Planck-Institut fuer Plasmaphysik, D-85748 Garching (Germany)
2007-10-15
A radio frequency (RF) driven negative ion source has been designed for the ITER neutral beam injectors as an alternative to the traditional arc-driven solution. The main advantage of this technology is that it avoids filaments, which require periodic maintenance and consequently frequent shutdowns. The requirements for the ion source of the ITER NBI are to provide a uniform flux of D⁻/H⁻ to the plasma grid of the accelerator that will result in a beam current of 40 A at 1 MeV. The present specification is for a filling pressure of 0.3 Pa. The ion source needs to provide 20/28 mA/cm² D⁻/H⁻ current density across the 0.58 m x 1.54 m aperture array for 3600 s. The source, consisting of a main chamber facing the plasma grid, eight RF drivers, and auxiliary systems for power transfer, cooling and diagnostic purposes, is housed in the same quasi-cylindrical structure that supports the arc-driven solution. Specific electric and hydraulic circuits have been designed and verified. In the paper the analyses performed for the design of the components are presented in detail.
Pattern Recognition of High O3 Episodes in Forecasting Daily Maximum Ozone Levels
Jeong-Sook Heo
2004-01-01
Full Text Available In this study, a method was developed to diagnose ozone episodes exceeding environmental criteria (e.g., above 80 ppb) on the basis of a multivariate statistical method and a fuzzy expert system. This method, capable of characterizing the occurrence patterns of high-level ozone, was employed to forecast daily maximum ozone levels. The hourly data for both air pollutants and meteorological parameters, obtained both at surface and at high-elevation (500 hPa) stations of Seoul City (1989-1996), were analyzed using this method. Through an application of the fuzzy expert system, the data sets were classified into 8 different types for common ozone episodes. In addition, the data sets were divided into 11 (Station A), 20 (Station B), 8 (Station C), and 10 (Station D) patterns for site-specific ozone episodes. The results of the analysis demonstrate that the method is sufficiently efficient to classify each class quantitatively with its own patterns of ozone pollution.
Ivashin V.A.
2013-12-01
Full Text Available Aims. The study presents the results of experimental research verifying the superposition principle for maximum permissible levels (MPL) of single exposures of the eye to multicolor laser radiation. This principle of the independence of the effects of radiation at each wavelength (the superposition principle) was established and generalized to a wide range of exposure conditions. As an analysis of the literature shows, experimental verification of this approach with respect to the impact of laser radiation on the fundus tissue of the eye had not previously been carried out. Material and methods. The experiments used lasers generating radiation at the wavelengths λ1 = 0.532 µm, λ2 = 0.556-0.562 µm, and λ3 = 0.619-0.621 µm, and were carried out on the eyes of rabbits with a uniformly pigmented fundus. Results. Comparison of the processed experimental data with the calculated data shows that these levels are close in their parameters. Conclusions. For the first time in the Russian Federation, experimental studies on the validity of the superposition principle for multicolor laser radiation exposure of the organ of vision have been performed. In view of the objective agreement of the experimental data with the calculated data, we can conclude that the mathematical formulas work.
Yu, Hwa-Lung; Wang, Chih-Hsin
2013-02-05
Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations, because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels, but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strongly nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.
Carletta, Nicholas D.; Mullendore, Gretchen L.; Starzec, Mariusz; Xi, Baike; Feng, Zhe; Dong, Xiquan
2016-08-01
Convective mass transport is the transport of mass from near the surface up to the upper troposphere and lower stratosphere (UTLS) by a deep convective updraft. This transport can alter the chemical makeup and water vapor balance of the UTLS, which affects cloud formation and the radiative properties of the atmosphere. It is therefore important to understand the exact altitudes at which mass is detrained from convection. The purpose of this study was to improve upon previously published methodologies for estimating the level of maximum detrainment (LMD) within convection using data from a single ground-based radar. Four methods were used to identify the LMD and validated against dual-Doppler derived vertical mass divergence fields for six cases with a variety of storm types. The best method for locating the LMD was determined to be the method that used a reflectivity texture technique to determine convective cores and a multi-layer echo identification to determine anvil locations. Although an improvement over previously published methods, the new methodology still produced unreliable results in certain regimes. The methodology worked best when applied to mature updrafts, as the anvil needs time to grow to a detectable size. Thus, radar reflectivity is found to be valuable in estimating the LMD, but storm maturity must also be considered for best results.
Benefit-cost estimation for alternative drinking water maximum contaminant levels
Gurian, Patrick L.; Small, Mitchell J.; Lockwood, John R.; Schervish, Mark J.
2001-08-01
A simulation model for estimating compliance behavior and resulting costs at U.S. Community Water Suppliers is developed and applied to the evaluation of a more stringent maximum contaminant level (MCL) for arsenic. Probability distributions of source water arsenic concentrations are simulated using a statistical model conditioned on system location (state) and source water type (surface water or groundwater). This model is fit to two recent national surveys of source waters, then applied with the model explanatory variables for the population of U.S. Community Water Suppliers. Existing treatment types and arsenic removal efficiencies are also simulated. Utilities with finished water arsenic concentrations above the proposed MCL are assumed to select the least cost option compatible with their existing treatment from among 21 available compliance strategies and processes for meeting the standard. Estimated costs and arsenic exposure reductions at individual suppliers are aggregated to estimate the national compliance cost, arsenic exposure reduction, and resulting bladder cancer risk reduction. Uncertainties in the estimates are characterized based on uncertainties in the occurrence model parameters, existing treatment types, treatment removal efficiencies, costs, and the bladder cancer dose-response function for arsenic.
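The flavor of such a compliance simulation can be sketched in a few lines. The concentration distribution, removal efficiency, and unit cost below are illustrative assumptions, not values from the study, which conditions occurrence on state and source-water type and chooses among 21 compliance strategies.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical lognormal source-water arsenic concentrations (ug/L)
n_systems = 10_000
conc = rng.lognormal(mean=1.0, sigma=1.2, size=n_systems)

mcl = 10.0            # candidate maximum contaminant level, ug/L
removal = 0.90        # assumed efficiency of the chosen treatment process
unit_cost = 0.3       # assumed annualized cost per complying system ($M)

needs_treatment = conc > mcl                       # systems above the MCL
finished = np.where(needs_treatment, conc * (1 - removal), conc)

national_cost = needs_treatment.sum() * unit_cost  # aggregate cost, $M/yr
exposure_reduction = (conc - finished).mean()      # mean ug/L reduction
```

Repeating this draw over the uncertain occurrence, treatment, and cost parameters yields the uncertainty characterization described above.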
Janssen PJCM; Speijers GJA; CSR
1997-01-01
This report contains a basic step-by-step description of the procedure followed in the derivation of the human-toxicological Maximum Permissible Risk (MPR; in Dutch: Maximum Toelaatbaar Risico, MTR) for soil contaminants. In recent years this method has been applied to a large number of compounds.
Ion source development for a photoneutralization based NBI system for fusion reactors
Simonin, A.; Esch, H. P. L. de; Garibaldi, P.; Grand, C.; Bechu, S.; Bès, A.; Lacoste, A. [CEA-Cadarache, IRFM, F-13108 St. Paul-lez-Durance (France); LPSC, Grenoble-Alpes University, F-38026 Grenoble France (France)
2015-04-08
The next step after ITER is to demonstrate the viability and generation of electricity by a future fusion reactor (DEMO). The specifications required to operate an NBI system on DEMO are very demanding. The system has to provide a very high level of power and energy, ~100 MW of D° beam at 1 MeV, with high wall-plug efficiency (η > 60%). For this purpose, a new injector concept, called Siphore, is under investigation between CEA and French universities. Siphore is based on the stripping of the accelerated negative ions by photo-detachment provided by several Fabry-Perot cavities (3.5 MW of light power per cavity) implemented along the D⁻ beam. The beamline is designed to be tall and narrow in order that the photon flux overlaps the entire negative ion beam. The paper describes the present R&D at CEA, which addresses the development of ion source and pre-accelerator prototypes for Siphore, the main goal being to produce an intense negative ion beam sheet. The negative ion source Cybele is based on a magnetized plasma column where hot electrons are emitted from the source center. Parametric studies of the source are performed using Langmuir probes in order to characterize the plasma and to compare with numerical models being developed in French universities.
Characteristics of edge pedestals in LHW and NBI heated H-mode plasmas on EAST
Zang, Q.; Wang, T.; Liang, Y.; Sun, Y.; Chen, H.; Xiao, S.; Han, X.; Hu, A.; Hsieh, C.; Zhou, H.; Zhao, J.; Zhang, T.; Gong, X.; Hu, L.; Liu, F.; Hu, C.; Gao, X.; Wan, B.; the EAST Team
2016-10-01
By using the recently developed Thomson scattering diagnostic, the pedestal structure of the H-mode with neutral beam injection (NBI) and/or lower hybrid wave (LHW) heating on EAST (Experimental Advanced Superconducting Tokamak) is analyzed in detail. We find that a higher ratio of the NBI power to the total NBI plus LHW power produces large and regular edge-localized modes (ELMs), while a lower ratio produces small and irregular ELMs. The experiments show that the mean pedestal width correlates well with β_p,ped^0.5. The pedestal width appears to be wider than that on other similar machines, which could be due to lithium coating. However, it is difficult to draw any conclusion about the correlation between ρ* and the pedestal width, given the limited ρ* variation and the scattered distribution. It is also found that Te/...
Computer-aided colorectal tumor classification in NBI endoscopy using local features.
Tamaki, Toru; Yoshimuta, Junki; Kawakami, Misato; Raytchev, Bisser; Kaneda, Kazufumi; Yoshida, Shigeto; Takemura, Yoshito; Onji, Keiichi; Miyaki, Rie; Tanaka, Shinji
2013-01-01
An early detection of colorectal cancer through colorectal endoscopy is important and widely used in hospitals as a standard medical procedure. During colonoscopy, the lesions of colorectal tumors on the colon surface are visually inspected by a Narrow Band Imaging (NBI) zoom-videoendoscope. By using the visual appearance of colorectal tumors in endoscopic images, histological diagnosis is presumed based on classification schemes for NBI magnification findings. In this paper, we report on the performance of a recognition system for classifying NBI images of colorectal tumors into three types (A, B, and C3) based on the NBI magnification findings. To deal with the problem of computer-aided classification of NBI images, we explore a local feature-based recognition method, bag-of-visual-words (BoW), and provide extensive experiments on a variety of technical aspects. The proposed prototype system, used in the experiments, consists of a bag-of-visual-words representation of local features followed by Support Vector Machine (SVM) classifiers. A number of local features are extracted by using sampling schemes such as Difference-of-Gaussians and grid sampling. In addition, we propose a new combination of local features and sampling schemes. Extensive experiments varying the parameters of each component are carried out, since the performance of the system is usually affected by those parameters, e.g. the sampling strategy for the local features, the representation of the local feature histograms, the kernel types of the SVM classifiers, the number of classes to be considered, etc. The recognition results are compared in terms of recognition rates, precision/recall, and F-measure for different numbers of visual words. The proposed system achieves a recognition rate of 96% for 10-fold cross validation on a real dataset of 908 NBI images collected during actual colonoscopy, and 93% for a separate test dataset.
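A minimal sketch of the BoW pipeline (k-means visual words followed by an SVM on word histograms) is shown below. Synthetic descriptors stand in for real NBI image features, and all sizes and parameters are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_local_features(label, n=50, dim=32):
    # stand-in for DoG/grid-sampled local descriptors of one image;
    # class-dependent mean makes the synthetic task learnable
    return rng.normal(loc=label, scale=1.0, size=(n, dim))

labels = [i % 3 for i in range(30)]        # types A, B, C3 encoded as 0/1/2
descriptors = [extract_local_features(y) for y in labels]

# 1) build the visual vocabulary by clustering all local descriptors
k = 20
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
kmeans.fit(np.vstack(descriptors))

# 2) represent each image as a normalized histogram of visual words
def bow_histogram(desc):
    hist = np.bincount(kmeans.predict(desc), minlength=k).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(d) for d in descriptors])

# 3) train an SVM classifier on the histograms
clf = SVC(kernel="rbf").fit(X, labels)
train_acc = clf.score(X, labels)
```

In the real system the descriptor extractor, vocabulary size, histogram representation, and SVM kernel are exactly the parameters the paper's experiments vary.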
The role of nature-based infrastructure (NBI) in coastal resiliency planning: A literature review.
Saleh, Firas; Weinstein, Michael P
2016-12-01
The use of nature-based infrastructure (NBI) has attracted increasing attention in the context of protection against coastal flooding. This review is focused on NBI approaches to improving coastal resilience in the face of extreme storm events, including hurricanes. We consider the role of NBI not only as a measure to protect people and property but also in the context of other ecological goods and services provided by tidal wetlands, including production of fish and shellfish. Although the results of many studies suggest that populated areas protected by coastal marshes were less likely to experience damage when exposed to the full force of storm surge, it is critical to place the role of coastal wetlands in perspective by noting that while tidal marshes can reduce wave energy from low- to moderate-energy storms, their capacity to substantially reduce storm surge remains poorly quantified. Moreover, although tidal marshes can reduce storm surge from fast-moving storms, very large expanses of habitat are needed to be most effective, and for most urban settings there is insufficient space to rely on nature-based risk reduction strategies alone. The success of a given NBI method is also dependent on local conditions, with potentially confounding influences from substrate characteristics, topography, nearshore bathymetry, distance from the shore and other physical factors, as well as human drivers such as development patterns. Furthermore, it is important to better understand the strengths and weaknesses of newly developed NBI projects through rigorous evaluations, and to characterize the local specificities of the built and natural environments surrounding these coastal areas. In order for the relevant science to better inform policy and assist in land-use challenges, scientists must clearly state the likelihood of success under a particular circumstance and set of conditions. We conclude that "caution is advised" before selecting a particular NBI.
Missing Data Imputation versus Full Information Maximum Likelihood with Second-Level Dependencies
Larsen, Ross
2011-01-01
Missing data in the presence of upper level dependencies in multilevel models have never been thoroughly examined. Whereas first-level subjects are independent over time, the second-level subjects might exhibit nonzero covariances over time. This study compares 2 missing data techniques in the presence of a second-level dependency: multiple…
Guasp, J.; Fuentes, C.; Liniers, M.
2005-07-01
The density and electron temperature radial profiles, corresponding to the experimental TJ-II campaigns 2003-2004 with NBI, have been fitted to simple functionals in order to allow a fast approximate evaluation for any given density and injected power... The fits have been calculated separately for the four possibilities: ECRH and NBI phases, as well as on- and off-axis ECRH injection. The average difference between the experimental profiles for the individual discharges and the fit predictions is around 8% for the density and 10% for the temperature. The behaviour of the predicted profiles with average line density and injected power has been analysed. The central electron temperature decreases monotonically with increasing density, and the on-axis ECRH-phase central value is clearly higher than the off-axis one. The radial density profiles narrow with increasing density, and the on-axis NBI case is clearly wider than the off-axis one. The electron temperature profile widens slightly with increasing density, and the width of the on-axis case is smaller than that of the off-axis case in all phases. Fortran subroutines, available on the three CIEMAT computers, allow the fast approximate evaluation of all these profiles. (Author) 8 refs.
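Fitting a radial profile to a simple functional typically looks like the sketch below, which fits a hypothetical n_e(ρ) = n0·(1-ρ²)^α form to noisy synthetic data. The functional and all numbers are illustrative, not the paper's actual fits.

```python
import numpy as np
from scipy.optimize import curve_fit

def profile(rho, n0, alpha):
    # simple parabolic-power functional often used for radial profiles
    return n0 * (1.0 - rho**2) ** alpha

rho = np.linspace(0.0, 0.95, 40)
rng = np.random.default_rng(1)
# synthetic "measured" profile with ~5% multiplicative scatter
data = profile(rho, 2.5, 1.8) * (1 + 0.05 * rng.normal(size=rho.size))

popt, _ = curve_fit(profile, rho, data, p0=[1.0, 1.0])
n0_fit, alpha_fit = popt
```

Tabulating the fitted parameters against line density and injected power gives exactly the kind of fast approximate evaluation the Fortran subroutines provide.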
Design of Three-Phase Three-Level CIC T-Source Inverter with Maximum Boost Control
Shults, Tatiana; Husev, Oleksandr; Roncero-Clemente, Carlos
2015-01-01
This paper presents guidelines for component design of the three-level three-phase T-source inverter with continuous input current under maximum boost control proposed recently. Steady state analysis under low-frequency current and voltage ripples in the dc side was made. Component sizes for both...
Li, Xiang; Xu, Yongjian; Yu, Ling; Chen, Yu; Hu, Chundong; Tao, Ling
2016-12-01
Neutral beam injection is recognized as one of the most effective means of plasma heating. According to the research plan of the EAST physics experiment, two sets of neutral beam injectors (4-8 MW, 10-100 s) were built and put into operation in 2014. Neutralization efficiency is one of the important parameters of a neutral beam: high neutralization efficiency not only improves the injection power at a given beam energy, but also decreases the power deposited on the heat-load components in the neutral beam injector (NBI). This research explores the power deposition distribution on the beamline components of the NBI device at different neutralization efficiencies. This work is of great significance for guiding the operation of EAST-NBI, especially in long-pulse, high-power operation, as it can reduce the risk of thermal damage to the beamline components and extend the working life of the NBI device. Supported by the International Science and Technology Cooperation Program of China (No. 2014DFG61950), the National Natural Science Foundation of China (No. 11405207) and the Foundation of ASIPP (No. DSJJ-15-GC03)
Kuriyama, Masaaki [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment
1997-02-01
The NBI (neutral beam injection) system for JT-60 is the first in the world to use negative ions. Its aims are to demonstrate heating and current drive in high-density plasma on JT-60 and to establish the physics and engineering basis for selecting and designing the heating system of ITER (International Thermonuclear Experimental Reactor). Construction of the 500 keV negative-ion NBI system for JT-60, begun in 1993, was completed in March 1996. Along the way, in a preliminary test of negative ion beam formation and acceleration using part of this system, the world's highest deuterium negative ion beam acceleration of 400 keV and 13.5 A was achieved, giving a bright outlook for plasma heating and current drive experiments using the negative-ion NBI system. Since March 1996, plans have been progressing to begin beam injection experiments on JT-60 with the negative-ion NBI system and to carry out heating and current drive experiments at increased beam output.
NBI torque in the presence of magnetic field ripple: experiments and modelling for JET
Salmi, A. T.; Tala, T.; Corrigan, G.; Giroud, C.; Ferreira, J.; Lonnroth, J.; Mantica, P.; Parail, V.; Tsalas, M.; Versloot, T. W.; de Vries, P. C.; Zastrow, K. D.
2011-01-01
Accurate and validated tools for calculating toroidal momentum sources are necessary to make reliable predictions of toroidal rotation for current and future experiments. In this work we present the first experimental validation of torque profile calculation from neutral beam injection (NBI) under t
Extreme value analysis of annual maximum water levels in the Pearl River Delta, China
Qiang ZHANG; Chong-Yu XU; Yongqin David CHEN; Chun-ling LIU
2009-01-01
We analyzed the statistical properties of water level extremes in the Pearl River Delta using five probability distribution functions. Estimation of parameters was performed using the L-moment technique. Goodness-of-fit was assessed with the Kolmogorov-Smirnov statistic D (K-S D). The results indicate that the Wakeby distribution is the best statistical model for describing the statistical behavior of water level extremes in the study region. Statistical analysis indicates that water levels corresponding to different return periods, and their associated variability, tend to be larger on the landward side of the Pearl River Delta and vice versa. A ridge characterized by higher water levels can be identified extending along the West River and the Modaomen channel, showing the impact of the hydrologic processes of the West River basin. Troughs and higher gradients of water level change can be detected in the region drained by the Xi'nanyong channel, the Dongping channel, and the mainstream of the Pearl River. The Pearl River Delta region is characterized by low-lying topography and a highly developed socio-economy, and is heavily populated, making it prone to flood hazards and inundation due to rising sea level and typhoons. Therefore, sound and effective countermeasures should be taken to mitigate natural hazards such as floods and typhoons.
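The general workflow (fit candidate distributions to annual maxima, screen with the K-S D statistic, then read off return levels) can be sketched with SciPy. SciPy has no Wakeby distribution, so a GEV stand-in and synthetic data are used here; the parameters are illustrative only.

```python
import numpy as np
from scipy import stats

# hypothetical annual-maximum water levels (m) at one delta station
annual_max = stats.genextreme.rvs(c=-0.1, loc=2.0, scale=0.4,
                                  size=40, random_state=0)

# fit a candidate distribution by maximum likelihood
params = stats.genextreme.fit(annual_max)

# goodness-of-fit: Kolmogorov-Smirnov statistic D
d_stat, p_value = stats.kstest(annual_max, "genextreme", args=params)

# design water level for a 100-year return period
level_100yr = stats.genextreme.ppf(1 - 1.0 / 100, *params)
```

The paper instead estimates parameters by L-moments, which are more robust for short hydrological records; the fit/test/return-level structure is the same.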
Wilson, Robert M.
1990-01-01
The level of skill in predicting the size of the sunspot cycle is investigated for the two types of precursor techniques, single variate and bivariate fits, both applied to cycle 22. The present level of growth in solar activity is compared to the mean level of growth (cycles 10-21) and to the predictions based on the precursor techniques. It is shown that, for cycle 22, both single variate methods (based on geomagnetic data) and bivariate methods suggest a maximum amplitude smaller than that observed for cycle 19, and possibly for cycle 21. Compared to the mean cycle, cycle 22 is presently behaving as if it were a +2.6 sigma cycle (maximum amplitude of about 225), which means that either it will be the first cycle not to be reliably predicted by the combined precursor techniques or its deviation relative to the mean cycle will substantially decrease over the next 18 months.
A. Sluijs
2011-01-01
Full Text Available A brief (~150 kyr) period of widespread global average surface warming marks the transition between the Paleocene and Eocene epochs, ~56 million years ago. This so-called "Paleocene-Eocene thermal maximum" (PETM) is associated with the massive injection of 13C-depleted carbon, reflected in a negative carbon isotope excursion (CIE). Biotic responses include a global abundance peak (acme) of the subtropical dinoflagellate Apectodinium. Here we identify the PETM in a marine sedimentary sequence deposited on the East Tasman Plateau at Ocean Drilling Program (ODP) Site 1172 and show, based on the organic paleothermometer TEX86, that southwest Pacific sea surface temperatures increased from ~26 °C to ~33 °C during the PETM. Such temperatures before, during and after the PETM are >10 °C warmer than predicted by paleoclimate model simulations for this latitude. In part, this discrepancy may be explained by potential seasonal biases in the TEX86 proxy in polar oceans. Additionally, the data suggest that not only Arctic, but also Antarctic temperatures may be underestimated in simulations of ancient greenhouse climates by current-generation fully coupled climate models. An early influx of abundant Apectodinium confirms that environmental change preceded the CIE on a global scale. Organic dinoflagellate cyst assemblages suggest a local decrease in the amount of river runoff reaching the core site during the PETM, possibly in concert with eustatic rise. Moreover, the assemblages suggest changes in the seasonality of the regional hydrological system and storm activity. Finally, significant variation in dinoflagellate cyst assemblages during the PETM indicates that southwest Pacific climates varied significantly over time scales of 10³-10⁴ years during this event, a finding comparable to similar studies of PETM successions from the New Jersey Shelf.
Algorithmic analysis of the maximum level length in general-block two-dimensional Markov processes
2006-01-01
Two-dimensional continuous-time Markov chains (CTMCs) are useful tools for studying stochastic models such as queueing, inventory, and production systems. Of particular interest in this paper is the distribution of the maximal level visited in a busy period, because this descriptor provides an excellent measure of the system congestion. We present an algorithmic analysis for the computation of its distribution which is valid for Markov chains with general-block structure. For a multiserver batch arrival queue with retrials and negative arrivals, we exploit the underlying internal block structure and present numerical examples that reveal some interesting facts of the system.
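As a much simpler special case of the maximal-level descriptor discussed above, the following sketch estimates the distribution of the maximum level reached during a busy period of an M/M/1 queue by simulating its embedded jump chain. This is only an illustration of the descriptor; the paper's algorithm computes the distribution exactly for chains with general-block structure, which this toy simulation does not attempt.

```python
import random

def busy_period_max(lam, mu, rng):
    """Simulate one M/M/1 busy period (starting at level 1) and return
    the maximum level reached before the queue empties.  Only the
    embedded jump chain matters for this descriptor, so holding times
    are not simulated."""
    level = peak = 1
    while level > 0:
        # Next jump is an arrival with probability lam/(lam+mu).
        if rng.random() < lam / (lam + mu):
            level += 1
            peak = max(peak, level)
        else:
            level -= 1
    return peak

def max_level_distribution(lam, mu, n_runs=20000, seed=1):
    """Empirical distribution of the busy-period maximum level."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_runs):
        m = busy_period_max(lam, mu, rng)
        counts[m] = counts.get(m, 0) + 1
    return {k: v / n_runs for k, v in sorted(counts.items())}

# Lightly loaded queue: most busy periods never climb past level 1.
dist = max_level_distribution(lam=0.5, mu=1.0)
```

For this queue the probability that the maximum is 1 equals the probability that the first jump is a departure, mu/(lam+mu) = 2/3, which the empirical distribution reproduces.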
Castrillon, Julio
2015-11-10
We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts in which the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that depends on the smoothness of the covariance function. Owing to this fast decay, only a small set of the multi-level covariance matrix coefficients is computed, using a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.
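For reference, here is a minimal evaluation of a Matérn covariance of the kind mentioned above (the ν = 3/2 case; the parameter names are illustrative, not the paper's). Its rapid decay with distance is what makes a level-dependent truncation of the covariance matrix effective.

```python
import math

def matern_32(d, sigma2=1.0, rho=1.0):
    """Matern covariance with smoothness nu = 3/2 between two
    observations separated by distance d (variance sigma2, range rho):
    C(d) = sigma2 * (1 + sqrt(3) d / rho) * exp(-sqrt(3) d / rho)."""
    a = math.sqrt(3.0) * d / rho
    return sigma2 * (1.0 + a) * math.exp(-a)

# Covariance at zero separation equals the variance, then decays
# quickly: entries for well-separated observations become negligible.
c0, c1, c5 = matern_32(0.0), matern_32(1.0), matern_32(5.0)
```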
Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M; Walker, La Tonya Nicole; Roberts, Barry L; Malczynski, Leonard A.
2017-06-01
This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation, because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.
Chalise, Santosh
Although solar photovoltaic (PV) systems have remained the fastest growing renewable power generating technology, variability as well as uncertainty in the output of PV plants is a significant issue. This rapid increase in PV grid-connected generation presents not only progress in clean energy but also challenges in integration with traditional electric power grids which were designed for transmission and distribution of power from central stations. Unlike conventional electric generators, PV panels do not have rotating parts and thus have no inertia. This potentially causes a problem when the solar irradiance incident upon a PV plant changes suddenly, for example, when scattered clouds pass quickly overhead. The output power of the PV plant may fluctuate nearly as rapidly as the incident irradiance. These rapid power output fluctuations may then cause voltage fluctuations, frequency fluctuations, and power quality issues. These power quality issues are more severe with increasing PV plant power output. This limits the maximum power output allowed from interconnected PV plants. Voltage regulation of a distribution system, a focus of this research, is a prime limiting factor in PV penetration levels. The IEEE 13-node test feeder, modeled and tested in the MATLAB/Simulink environment, was used as an example distribution feeder to analyze the maximum acceptable penetration of a PV plant. The effect of the PV plant's location was investigated, along with the addition of a VAR compensating device (a D-STATCOM in this case). The results were used to develop simple guidelines for determining an initial estimate of the maximum PV penetration level on a distribution feeder. For example, when no compensating devices are added to the system, a higher level of PV penetration is generally achieved by installing the PV plant close to the substation. The opposite is true when a VAR compensator is installed with the PV plant. In these cases, PV penetration levels over 50% may be
Progressing state of design and R and D of NBI for ITER
Ohara, Yoshihiro [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment
1997-02-01
In the International Thermonuclear Experimental Reactor (ITER), the Neutral Beam Injection (NBI) system is regarded as a powerful means of current drive, plasma heating and stabilization, and stable plasma control. Accordingly, the design and development of a compact, reactor-compatible 1 MeV class negative-ion NBI system have been carried out. In the engineering design, the number of ports for the NBI system was changed from 3 to 4 at the intermediate design stage completed in June 1995, and the injection power per port was correspondingly increased from 12.5 MW to 16.7 MW, imposing more severe requirements on the negative ion source. At present, the designs of the beam deflector, magnetic shield, neutron shielding, remote maintenance and so forth, as well as of the negative ion source and accelerator, are progressing. In the engineering R&D, the negative ion source has achieved a deuterium negative ion current and current density of about 1/3 of the values required for the actual ITER system at the target operating gas pressure, and the negative ion accelerator has achieved over 80% of the negative ion acceleration energy targeted for ITER. (G.K.)
Schmid, Gernot; Kuster, Niels
2015-02-01
The objective of this paper is to compare realistic maximum electromagnetic exposure of human tissues generated by mobile phones with the electromagnetic exposures applied during in vitro experiments to assess potentially adverse effects of electromagnetic exposure in the radiofrequency range. We reviewed 80 in vitro studies published between 2002 and the present that concern possible adverse effects of exposure to mobile phones operating in the 900 and 1800 MHz bands. We found that the highest exposure level averaged over the cell medium that includes the evaluated cells (monolayer or suspension) used in 51 of the 80 studies corresponds to 2 W/kg or less, a level below the limit defined for the general public; this does not take into account any exposure non-uniformity. For comparison, we estimated, by numerical means using dipoles and a commercial mobile phone model, the maximum conservative exposure of superficial tissues from sources operated in the 900 and 1800 MHz bands. The analysis demonstrated that exposure of skin, blood, and muscle tissues may well exceed 40 W/kg at the cell level. Consequently, in vitro studies reporting minimal or no effects in response to maximum exposure of 2 W/kg or less averaged over the cell media, which includes the cells, may be of only limited value for analyzing risk from realistic mobile phone exposure. We therefore recommend that future in vitro experiments use specific absorption rate levels that reflect maximum exposures, and that additional temperature control groups be included to account for sample heating.
Nimmrichter, P.; McClintock, J.; Peng, J. [AMEC plc., Toronto, ON (Canada); Leung, H. [Nuclear Waste Management Organization, Toronto, ON (Canada)
2011-07-01
Ontario Power Generation (OPG) has entered a process to seek Environmental Assessment and licensing approvals to construct a Deep Geologic Repository (DGR) for Low and Intermediate Level Radioactive Waste (L&ILW) near the existing Western Waste Management Facility (WWMF) at the Bruce nuclear site in the Municipality of Kincardine, Ontario. In support of the design of the proposed DGR project, maximum flood stages were estimated for potential flood hazard risks associated with coastal, riverine and direct precipitation flooding. The estimation of lake/coastal flooding for the Bruce nuclear site considered potential extreme water levels in Lake Huron, storm surge and seiche, wind waves, and tsunamis. The riverine flood hazard assessment considered the Probable Maximum Flood (PMF) within the local watersheds, and within local drainage areas that will be directly impacted by the site development. A series of hydraulic models were developed, based on DGR project site grading and ditching, to assess the impact of a Probable Maximum Precipitation (PMP) occurring directly at the DGR site. Overall, this flood assessment concluded there is no potential for lake or riverine based flooding and the DGR area is not affected by tsunamis. However, it was also concluded from the results of this analysis that the PMF in proximity to the critical DGR operational areas and infrastructure would be higher than the proposed elevation of the entrance to the underground works. This paper provides an overview of the assessment of potential flood hazard risks associated with coastal, riverine and direct precipitation flooding that was completed for the DGR development. (author)
Aasvang, Gunn Marit; Moum, Torbjorn; Engdahl, Bo
2008-07-01
The objective of the present survey was to study self-reported sleep disturbances due to railway noise with respect to nighttime equivalent noise level (L(p,A,eq,night)) and maximum noise level (L(p,A,max)). A sample of 1349 people in and around Oslo in Norway exposed to railway noise was studied in a cross-sectional survey to obtain data on sleep disturbances, sleep problems due to noise, and personal characteristics including noise sensitivity. Individual noise exposure levels were determined outside of the bedroom facade, the most-exposed facade, and inside the respondents' bedrooms. The exposure-response relationships were analyzed by using logistic regression models, controlling for possible modifying factors including the number of noise events (train pass-by frequency). L(p,A,eq,night) and L(p,A,max) were significantly correlated, and the proportion of reported noise-induced sleep problems increased as both L(p,A,eq,night) and L(p,A,max) increased. Noise sensitivity, type of bedroom window, and pass-by frequency were significant factors affecting noise-induced sleep disturbances, in addition to the noise exposure level. Because about half of the study population did not use a bedroom at the most-exposed side of the house, the exposure-response curve obtained by using noise levels for the most-exposed facade underestimated noise-induced sleep disturbance for those who actually have their bedroom at the most-exposed facade.
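As an illustration of the exposure-response modelling described above, here is a minimal logistic fit on synthetic data. The noise levels, effect size, sample size and centring are assumptions made for the sketch, not the survey's data, and the plain gradient-descent fitter stands in for the study's logistic regression models.

```python
import math, random

def fit_logistic(xs, ys, lr=0.02, epochs=2000):
    """Fit P(disturbed) = sigmoid(a + b*x) by full-batch gradient
    descent on the log-likelihood -- a minimal stand-in for the
    exposure-response logistic models in the study."""
    a = b = 0.0
    n = float(len(xs))
    for _ in range(epochs):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += (p - y) / n
            gb += (p - y) * x / n
        a -= lr * ga
        b -= lr * gb
    return a, b

# Synthetic data (assumption, not the survey's data): probability of
# reported sleep disturbance rises with the maximum noise level;
# levels are centred on 60 dB before fitting for numerical stability.
rng = random.Random(0)
levels = [rng.uniform(-20.0, 20.0) for _ in range(400)]   # L_max - 60 dB
true_p = lambda x: 1.0 / (1.0 + math.exp(-0.15 * x))
outcomes = [1 if rng.random() < true_p(x) else 0 for x in levels]
a_hat, b_hat = fit_logistic(levels, outcomes)
```

A positive fitted slope `b_hat` corresponds to the exposure-response relationship reported in the study: the proportion of noise-induced sleep problems increases with noise level.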
Kupczewska-Dobecka, Małgorzata; Soćko, Renata; Czerczak, Sławomir
2006-01-01
The aim of this work is to analyse the Maximum Admissible Concentration (MAC) values proposed for irritants by the Group of Experts for Chemical Agents in Poland, based on the RD50 value. In 1994-2004, MAC values for irritants based on the RD50 value were set for 17 chemicals. For the purpose of the analysis, 1/10 RD50, 1/100 RD50 and the MAC/RD50 ratio were calculated. The determined MAC values are within the 0.01-0.09 RD50 range. The RD50 value is a good rough criterion for setting MAC values for irritants, and it makes it possible to quickly estimate admissible exposure levels. It has become clear that, in some cases, simply setting the MAC value for an irritant at the level of 0.03 RD50 may be insufficient to determine precisely the possible hazard to workers' health. Other available toxicological data, such as the NOAEL (No-Observed-Adverse-Effect Level) and LOAEL (Lowest-Observed-Adverse-Effect Level), should always be considered as well.
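The rule-of-thumb arithmetic described above can be sketched as follows; the RD50 value in the usage line is hypothetical for illustration, not one of the 17 substances analysed.

```python
def mac_candidates(rd50):
    """Candidate exposure limits derived from an RD50 value, using the
    0.01-0.1 x RD50 rule-of-thumb band discussed in the analysis."""
    return {"0.01*RD50": 0.01 * rd50,
            "0.03*RD50": 0.03 * rd50,
            "0.10*RD50": 0.10 * rd50}

def mac_ratio(mac, rd50):
    """Return MAC/RD50 and whether it falls inside the 0.01-0.09 band
    reported for the Polish MAC values."""
    r = mac / rd50
    return r, 0.01 <= r <= 0.09

# Hypothetical irritant with RD50 = 100 ppm (illustrative only):
# a MAC of 3 ppm sits at 0.03 RD50, inside the reported band.
ratio, in_band = mac_ratio(3.0, 100.0)
```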
Development of a high-current hydrogen-negative ion source for LHD-NBI system
Takeiri, Yasuhiko; Osakabe, Masaki; Tsumori, Katsuyoshi; Oka, Yoshihide; Kaneko, Osamu; Asano, Eiji; Kawamoto, Toshikazu; Akiyama, Ryuichi [National Inst. for Fusion Science, Toki, Gifu (Japan); Tanaka, Masanobu
1998-08-01
We have developed a high-current hydrogen negative ion source for the negative-ion-based NBI system of the Large Helical Device (LHD). The ion source is a cesium-seeded volume-production source equipped with an external magnetic filter. The arc chamber is rectangular, with dimensions of 35 cm x 145 cm in cross section and 21 cm in depth. A three-grid single-stage accelerator is divided into five sections longitudinally, each of which has 154 (14 x 11) apertures in an area of 25 cm x 25 cm. The ion source was tested on the negative-NBI test stand, and a 25 A negative ion beam was delivered to a beam dump 13 m downstream at an energy of 104 keV for 1 s. The 770 beamlets are focused on a focal point 13 m downstream with an averaged divergence angle of 10 mrad by the geometrical arrangement of the five grid sections and the aperture displacement technique of the grounded grid. A beam that is uniform in the vertical direction over 125 cm is obtained with uniform plasma production in the arc chamber by balancing the individual arc currents flowing through each filament. Long-pulse beam production was also performed: a 1.3 MW negative ion beam was delivered to the beam dump for 10 s, with the temperature rise of the cooling water almost saturated for the extraction and grounded grids. These results satisfy the first-step specification of the LHD-NBI system. (author)
Design of a Large Scale Titanium Getter Pump for the HL-2A NBI
JIANGTao; JIANGShaofeng; LEIGuangjiu
2003-01-01
The HL-2A NBI design parameters are: a hydrogen beam energy of 60 keV, a pulse duration of 2 s, two beam lines, and tangential injection. The neutral beam power will eventually reach 4 MW. The first step is one beam line with two ion sources. The vacuum vessel consists of a major chamber, a minor chamber, an ion source chamber and a drift duct. A vacuum system of sufficient pumping capacity and relatively low cost is therefore needed.
Electrical and thermal analyses for the radio-frequency circuit of ITER NBI ion source
Zamengo, A. [Consorzio RFX, EURATOM-ENEA Association, Corso Stati Uniti, 4, 35127 Padova (Italy)], E-mail: andrea.zamengo@igi.cnr.it; Recchia, M. [Consorzio RFX, EURATOM-ENEA Association, Corso Stati Uniti, 4, 35127 Padova (Italy); Department of Electrical Engineering, University of Padua, Via Gradenigo 6/A, 35131 Padova (Italy); Kraus, W. [Max-Planck-Institut fuer Plasmaphysik, EURATOM Association, Boltzmannstr. 2, D-85748 Garching (Germany); Bigi, M. [Consorzio RFX, EURATOM-ENEA Association, Corso Stati Uniti, 4, 35127 Padova (Italy); Martens, C. [Max-Planck-Institut fuer Plasmaphysik, EURATOM Association, Boltzmannstr. 2, D-85748 Garching (Germany); Toigo, V. [Consorzio RFX, EURATOM-ENEA Association, Corso Stati Uniti, 4, 35127 Padova (Italy)
2009-06-15
This paper covers specific electrical and thermal aspects of the radio-frequency (RF) circuit which supplies the ion source of the International Thermonuclear Experimental Reactor (ITER) Neutral Beam Injector (NBI). Firstly, a matching circuit for the RF Antennas is presented and a possible solution for the matching components discussed, in relation to the anticipated equivalent circuit parameters of the RF driven plasma. Secondly, the thermal behaviour of the RF transmission line is analyzed, utilising finite element tools, to evaluate the RF line overtemperature under the heaviest foreseen operating conditions.
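The paper presents a matching circuit for the RF antennas but the abstract does not give its topology. As a hedged, generic illustration of RF matching of this kind, the following computes a lossless L-network (series inductor, shunt capacitor) that matches a small resistive antenna load to a 50 Ω line at an assumed 1 MHz drive frequency; both the load value and the frequency are assumptions for the sketch, not values from the paper.

```python
import math

def l_match(r_line, r_load, f_hz):
    """Lossless L-network matching a small resistive load r_load to a
    line of impedance r_line (> r_load) at frequency f_hz, using the
    standard relations Q = sqrt(r_line/r_load - 1), Xs = Q*r_load,
    Xp = r_line/Q.  Returns (series inductance H, shunt capacitance F)."""
    if not r_line > r_load:
        raise ValueError("this topology needs r_line > r_load")
    q = math.sqrt(r_line / r_load - 1.0)
    x_series = q * r_load          # series reactance on the load side
    x_shunt = r_line / q           # shunt reactance on the line side
    w = 2.0 * math.pi * f_hz
    return x_series / w, 1.0 / (w * x_shunt)

# Hypothetical 2-ohm antenna resistance matched to a 50-ohm line at 1 MHz.
L, C = l_match(r_line=50.0, r_load=2.0, f_hz=1.0e6)
```

The returned components can be checked by computing the input impedance of the shunt capacitor in parallel with the series branch, which comes out purely resistive at the line impedance.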
Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J
2016-03-01
Information from various public and private data sources of extremely large sample sizes are now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an "internal" study while utilizing summary-level information, such as information on parameters for reduced models, from an "external" big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature.
Monopoles in NBI-Higgs theory and Born-Infeld collapse
Dyadichev, V V
2002-01-01
Regular magnetic monopoles in the non-Abelian Born-Infeld-Higgs theory are known to exist in the region of the field strength parameter $\beta > \beta_{\rm cr}$, bounded from below. Beyond this region, only pointlike (embedded Abelian) monopoles exist, and we show that the transition from the regular to the singular structure is reminiscent of gravitational collapse. The near-threshold behavior is characterized by a rapidly increasing negative pressure, which typically arises in high-density NBI matter. Another feature, shared by both NBI and gravitating monopoles, is the existence of excited states, which can be thought of as bound states of monopoles and sphalerons. These are labeled by the number $N$ of nodes of the Yang-Mills function. Their masses are greater than the mass of the ground-state monopole, and they are expected to be unstable. The sequence of masses $M_N$ rapidly converges to the mass of the embedded Abelian solution with constant Higgs. The ratio of the sphaleron size to that of the mono...
Progress of the ITER NBI acceleration grid power supply reference design
Toigo, Vanni [Consorzio RFX, Associazione EURATOM-ENEA sulla Fusione, Corso Stati Uniti 4, I-35127 Padova (Italy); Zanotto, Loris, E-mail: loris.zanotto@igi.cnr.it [Consorzio RFX, Associazione EURATOM-ENEA sulla Fusione, Corso Stati Uniti 4, I-35127 Padova (Italy); Bigi, Marco [Consorzio RFX, Associazione EURATOM-ENEA sulla Fusione, Corso Stati Uniti 4, I-35127 Padova (Italy); Decamps, Hans [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Ferro, Alberto; Gaio, Elena [Consorzio RFX, Associazione EURATOM-ENEA sulla Fusione, Corso Stati Uniti 4, I-35127 Padova (Italy); Gutiérrez, Daniel [Fusion For Energy, C/Josep Pla 2, 08019 Barcelona (Spain); Tsuchida, Kazuki; Watanabe, Kazuhiro [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki-ken 311-0193 (Japan)
2013-10-15
Highlights: • This paper reports the progress in the reference design of the Acceleration Grid Power Supply (AGPS) of the ITER Neutral Beam Injector (NBI). • A critical revision of the main design choices is presented in light of the definition of some key interface parameters between the two AGPS subsystems. • The verification of the fulfillment of the requirements in all operational conditions is reported and discussed. Abstract: This paper reports the progress in the reference design of the Acceleration Grid Power Supply (AGPS) of the ITER Neutral Beam Injector (NBI). The design of the AGPS is very challenging, as it shall be rated to provide about 55 MW at 1 MV dc in quasi-steady-state conditions; moreover, the procurement of the system is shared between the European Domestic Agency (F4E) and the Japanese Domestic Agency (JADA), resulting in additional design complications due to the need for a common definition of the interface parameters. A critical revision of the main design choices is presented, also in light of the definition of some key interface parameters between the two AGPS subsystems. Moreover, the verification of the fulfillment of the requirements in all operational conditions, taking into account the tolerances of the different parameters, is also reported and discussed.
Origin of giant piezoelectric effect in lead-free K1-xNaxTa1-yNbyO3 single crystals.
Tian, Hao; Meng, Xiangda; Hu, Chengpeng; Tan, Peng; Cao, Xilong; Shi, Guang; Zhou, Zhongxiang; Zhang, Rui
2016-05-10
A series of high-quality, large-sized (maximum size of 16 × 16 × 32 mm³) K1-xNaxTa1-yNbyO3 (x = 0.61, 0.64, and 0.70 and corresponding y = 0.58, 0.60, and 0.63) single crystals were grown using the top-seed solution growth method. The segregation of the crystals, which allowed for precise control of the individual components of the crystals during growth, was investigated. The obtained crystals exhibited excellent properties without being annealed, including a low dielectric loss (0.006), a saturated hysteresis loop, a giant piezoelectric coefficient d33 (d33 = 416 pC/N, determined by the resonance method, and d33* = 480 pC/N, measured using a piezo-d33 meter), and a large electromechanical coupling factor, k33 (k33 = 83.6%), which was comparable to that of lead zirconate titanate. The reason the piezoelectric coefficient d33 of K0.39Na0.61Ta0.42Nb0.58O3 was larger than those of the other two crystals grown was elucidated through first-principles calculations. The obtained results indicated that K1-xNaxTa1-yNbyO3 crystals can be used as a high-quality, lead-free piezoelectric material.
Upgrades and application of FIT3D NBI-plasma interaction code in view of LHD deuterium campaigns
Vincenzi, P.; Bolzonella, T.; Murakami, S.; Osakabe, M.; Seki, R.; Yokoyama, M.
2016-12-01
This work presents an upgrade of the FIT3D neutral beam-plasma interaction code, part of TASK3D, a transport suite of codes, and its application to LHD experiments in the framework of the preparation for the first deuterium experiments in the LHD. The neutral beam injector (NBI) system will be upgraded to D injection, and efforts have been recently made to extend LHD modelling capabilities to D operations. The implemented upgrades for FIT3D to enable D NBI modelling in D plasmas are presented, with a discussion and benchmark of the models used. In particular, the beam ionization module has been modified and a routine for neutron production estimation has been implemented. The upgraded code is then used to evaluate the NBI power deposition in experiments with different plasma compositions. In the recent LHD campaign, in fact, He experiments have been run to help the prediction of main effects which may be relevant in future LHD D plasmas. Identical H/He experiments showed similar electron density and temperature profiles, while a higher ion temperature with an He majority has been observed. From first applications of the upgraded FIT3D code it turns out that, although more NB power appears to be coupled with the He plasma, the NBI power deposition is unaffected, suggesting that heat deposition does not play a key role in the increased ion temperature with He plasma.
Laser photodetachment diagnostics of a 1/3-size negative hydrogen ion source for NBI
Geng, S., E-mail: geng.shaofei@nifs.ac.jp [The Graduate University for Advanced Studies, Oroshi, Toki, Gifu 509-5292 (Japan); Tsumori, K.; Nakano, H.; Kisaki, M.; Ikeda, K.; Takeiri, Y.; Osakabe, M.; Nagaoka, K.; Kaneko, O. [National Institutes for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 Japan (Japan)
2015-04-08
To investigate the flows of charged particles in front of the plasma grid (PG) in a negative hydrogen ion source, knowledge of the local densities of electrons and negative hydrogen ions (H-) is necessary. For this purpose, laser photodetachment is applied to pure hydrogen plasmas and Cs-seeded plasmas in a 1/3-size negative hydrogen ion source on the NIFS-NBI test stand. The H- density obtained by photodetachment is calibrated against results from cavity ring-down (CRD) measurements. The pressure dependence and PG bias dependence of the local H- density are presented and discussed. The results show that the H- density increases significantly when Cs is seeded into the plasma. In Cs-seeded plasma, a correlation exists between the H- ion density and the plasma potential.
Balsalobre-Fernández, Carlos; Tejero-González, Carlos M; Del Campo-Vecino, Juan; Alonso-Curiel, Dionisio
2013-03-01
The aim of this study was to determine the effects of a power training cycle on maximum strength, maximum power, vertical jump height and acceleration in seven high-level 400-meter hurdlers subjected to a specific training program twice a week for 10 weeks. Each training session consisted of five sets of eight jump-squats with the load at which each athlete produced his maximum power. The repetition maximum in the half squat position (RM), maximum power in the jump-squat (W), a squat jump (SJ), countermovement jump (CSJ), and a 30-meter sprint from a standing position were measured before and after the training program using an accelerometer, an infra-red platform and photo-cells. The results indicated the following statistically significant improvements: a 7.9% increase in RM (Z=-2.03, p=0.021, δc=0.39), a 2.3% improvement in SJ (Z=-1.69, p=0.045, δc=0.29), a 1.43% decrease in the 30-meter sprint (Z=-1.70, p=0.044, δc=0.12), and, where maximum power was produced, a change in the RM percentage from 56 to 62% (Z=-1.75, p=0.039, δc=0.54). As such, it can be concluded that strength training with a maximum power load is an effective means of increasing strength and acceleration in high-level hurdlers.
Maximum power point tracking for photovoltaic applications by using two-level DC/DC boost converter
Moamaei, Parvin
Recently, photovoltaic (PV) generation has become increasingly popular in industrial applications. As a renewable and alternative source of energy, PV systems feature superior characteristics such as being clean and silent, along with fewer maintenance problems compared to other energy sources. In PV generation, employing a Maximum Power Point Tracking (MPPT) method is essential to obtain the maximum available solar energy. Among the several proposed MPPT techniques, the Perturb and Observe (P&O) and Model Predictive Control (MPC) methods are adopted in this work. The components of the MPPT control system, namely the P&O and MPC algorithms, the PV module and the high-gain DC-DC boost converter, are simulated in MATLAB Simulink. They are evaluated under rapidly and slowly changing solar irradiance and temperature, their performance is shown by the simulation results, and finally a comprehensive comparison is presented.
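A minimal sketch of the P&O idea (not the thesis' MATLAB/Simulink implementation): perturb the operating voltage and keep perturbing in whichever direction increases the measured power, so the operating point climbs to, and then oscillates around, the maximum power point. The PV curve below is a toy linear-current assumption, not a real module model.

```python
def perturb_and_observe(measure_current, v_start, dv=0.5, steps=100):
    """Minimal perturb-and-observe MPPT loop: nudge the operating
    voltage by dv and reverse direction whenever power drops."""
    v = v_start
    p_prev = measure_current(v) * v
    direction = 1.0
    for _ in range(steps):
        v += direction * dv
        p = measure_current(v) * v
        if p < p_prev:            # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy PV curve (assumption): current falls linearly with voltage, so
# power P(v) = 8v(1 - v/40) peaks at v = 20 V.
toy_current = lambda v: max(0.0, 8.0 * (1.0 - v / 40.0))
v_mpp = perturb_and_observe(toy_current, v_start=5.0)
```

The tracker ends up oscillating within one perturbation step of the true maximum power point, which is the well-known steady-state ripple of P&O.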
Ishida, Hiroshi; Hirose, Ryohei; Watanabe, Susumu
2012-10-01
The abdominal drawing-in maneuver (ADIM) is commonly used as a fundamental component of lumbar stabilization training programs. One potential limitation of lumbar stabilization programs is that it can be difficult and time consuming to train people to perform the ADIM. The transverse abdominis (TrA), internal oblique (IO), and external oblique (EO) muscles are the most powerful muscles involved in expiration. However, little is known about the differences in the recruitment of the abdominal muscles between the ADIM and breath held at the maximum expiratory level (maximum expiration). The thickness of the TrA and IO muscles was measured by ultrasound imaging, and the activity of the EO muscle was measured by electromyography (EMG) in 33 healthy males performing the ADIM and maximum expiration. Maximum expiration produced a significant increase in the thickness of the TrA and IO muscles compared to the ADIM. The activity of the EO muscle was significantly higher during maximum expiration than during the ADIM, and was approximately 30% of the maximal voluntary contraction during maximum expiration. Thus, maximum expiration may be an effective method for training co-activation of the lateral abdominal muscles.
Nouws, J.F.M.; Egmond, van H.; Loeffen, G.; Schouten, J.; Keukens, H.; Smulders, I.; Stegeman, H.
1999-01-01
In this paper we assessed the suitability of the Charm HVS and a newly developed microbiological multiplate system as post-screening tests to confirm the presence of residues in raw milk at or near the maximum permissible residue level (MRL). The multiplate system is composed of Bacillus stearotherm
Werner, Stefanie [Umweltbundesamt, Dessau-Rosslau (Germany). Fachgebiet II 2.3
2011-05-15
When offshore wind farms are constructed, every single pile is hammered into the sediment by a hydraulic hammer. Noise levels at Horns Reef wind farm were in the range of 235 dB. The noise may cause damage to the auditory system of marine mammals. The Federal Environmental Office therefore recommends the definition of maximum permissible noise levels. Further, care should be taken that no marine mammals are found in the immediate vicinity of the construction site. (AKB)
Research on a network maximum-flow algorithm based on cascade level graphs
潘荷新; 伊崇信; 李满
2011-01-01
This paper gives an algorithm that constructs a cascade level graph of the network in order to find the maximum flow indirectly. For a given network N = (G, s, t, C) with n vertices and e arcs, the algorithm computes the maximum flow value through N, and a flow attaining it, in O(n²) time.
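Level-graph constructions of this kind are the core of Dinic's maximum-flow algorithm, in which a BFS assigns a level to each vertex and flow is then pushed only along edges that advance exactly one level. The sketch below is a generic Dinic implementation shown for illustration only; it is not the paper's specific O(n²) algorithm, which cannot be reconstructed from the abstract.

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Dinic's algorithm: repeatedly build a level graph by BFS, then
    push blocking flows along level-advancing edges by DFS."""
    graph = [[] for _ in range(n)]   # adjacency: lists of edge ids
    E = []                           # edge storage: [target, residual capacity]
    for u, v, c in edges:
        graph[u].append(len(E)); E.append([v, c])  # forward edge
        graph[v].append(len(E)); E.append([u, 0])  # paired residual edge

    def bfs():
        # Build the level graph; return levels, or None if t is unreachable.
        level = [-1] * n
        level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for eid in graph[u]:
                v, cap = E[eid]
                if cap > 0 and level[v] == -1:
                    level[v] = level[u] + 1
                    q.append(v)
        return level if level[t] != -1 else None

    def dfs(u, pushed, level, it):
        # Push flow along edges that go from level k to level k+1 only.
        if u == t:
            return pushed
        while it[u] < len(graph[u]):
            eid = graph[u][it[u]]
            v, cap = E[eid]
            if cap > 0 and level[v] == level[u] + 1:
                d = dfs(v, min(pushed, cap), level, it)
                if d > 0:
                    E[eid][1] -= d
                    E[eid ^ 1][1] += d  # edges were added in pairs
                    return d
            it[u] += 1
        return 0

    flow = 0
    while True:
        level = bfs()
        if level is None:
            return flow
        it = [0] * n
        while True:
            pushed = dfs(s, float('inf'), level, it)
            if pushed == 0:
                break
            flow += pushed
```

Each BFS/DFS phase saturates a blocking flow in the current level graph; the number of phases is bounded by the number of vertices, which is what makes level-graph methods fast in practice.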
Compensations of beamlet deflections for 1 MeV accelerator of ITER NBI
Kashiwagi, Mieko; Taniguchi, Masaki; Umeda, Naotaka; Dairaku, Masayuki; Tobari, Hiroyuki; Yamanaka, Haruhiko; Watanabe, Kazuhiro; Inoue, Takashi; de Esch, H. P. L.; Grisham, Larry R.; Boilson, Deirdre; Hemsworth, Ronald S.; Tanaka, Masanobu
2013-02-01
Compensation methods for beamlet deflections have been studied in a three-dimensional (3D) beam analysis using the OPERA-3d code for the 1 MeV accelerator of the ITER neutral beam injector (NBI). Beamlet deflection is caused by i) the magnetic field generated by permanent magnets embedded in the extraction grid (EXG) for electron suppression and ii) space-charge repulsion between beamlets and between beam groups. In addition, beamlet deflection arises from the electric field distortion produced by the grid support structure. To compensate for the deflections due to i) and ii), an aperture offset of 0.6 mm was applied in the electron suppression grid (ESG) and a 3 mm thick metal bar, the so-called kerb, was attached around the aperture area on the back side of the ESG, respectively. The detailed configuration of these compensation methods was also chosen so as to suppress the beam spread due to the electric field distortion and to lower electric field concentrations at the edge of the kerb. For beamlets near the grid support structure, the deflection due to space-charge repulsion could be offset by the electric field distortion produced by the support structure.
Global anomalous transport of ICRH- and NBI-heated fast ions
Wilkie, G. J.; Pusztai, I.; Abel, I.; Dorland, W.; Fülöp, T.
2017-04-01
By taking advantage of the trace approximation, one can gain an enormous computational advantage when solving for the global turbulent transport of impurities. In particular, this makes feasible the study of non-Maxwellian transport coupled in radius and energy, allowing collisions and transport to be accounted for on similar time scales, as occurs for fast ions. In this work, we study the fully-nonlinear ITG-driven trace turbulent transport of locally heated and injected fast ions. Previous results indicated the existence of MeV-range minorities heated by cyclotron resonance, and an associated density pinch effect. Here, we build upon this result using the t3core code to solve for the distribution of these minorities, consistently including the effects of collisions, gyrokinetic turbulence, and heating. Using the same tool to study the transport of injected fast ions, we contrast the qualitative features of their transport with that of the heated minorities. Our results indicate that heated minorities are more strongly affected by microturbulence than injected fast ions. The physical interpretation of this difference provides a possible explanation for the observed synergy when neutral beam injection (NBI) heating is combined with ion cyclotron resonance heating (ICRH). Furthermore, we move beyond the trace approximation to develop a model which allows one to easily account for the reduction of anomalous transport due to the presence of fast ions in electrostatic turbulence.
Analysis of active and passive magnetic field reduction systems (MFRS) of the ITER NBI
Roccella, M. [L.T. Calcoli S.a.S., Piazza Prinetti 26/B, Merate (Lecco) (Italy)], E-mail: roccella@ltcalcoli.it; Lucca, F.; Roccella, R. [L.T. Calcoli S.a.S., Piazza Prinetti 26/B, Merate (Lecco) (Italy); Pizzuto, A.; Ramogida, G. [Associazione EURATOM sulla Fusione - ENEA Frascati (Italy); Portone, A.; Tanga, A. [ITER EFDA (Italy); Formisano, A.; Martone, R. [CREATE Napoli (Italy)
2007-10-15
In ITER, two heating neutral beam injectors (HNBI) and one diagnostic neutral beam injector (DNBI) are foreseen. Inside these components there are very stringent limits on the magnetic field (the flux density must be below some G along the ion path and below 20 G in the neutralizing regions). To achieve this performance in an environment with a high stray field due to the plasma and the poloidal field coils (PFC), both passive and active shielding systems have been foreseen. The present design of the magnetic field reduction systems (MFRS) consists of seven active coils and a box of ferromagnetic plates surrounding the NBI region. The electromagnetic analyses of the effectiveness of these shields have been performed for the HNBI with a 3D FEM model using the ANSYS code. The ANSYS models of the ferromagnetic box and of the active coils are fully parametric, so any size change of the ferromagnetic box and coils (linear dimension or thickness) preserving the overall box shape can easily be reproduced by simply changing some parameters in the model.
Design issues of the High Voltage platform and feedthrough for the ITER NBI Ion Source
Boldrin, M. [Consorzio RFX, Associazione EURATOM-ENEA sulla Fusione, Corso Stati Uniti 4, I-35127 Padova (Italy)], E-mail: marco.boldrin@igi.cnr.it; Palma, M. Dalla; Milani, F. [Consorzio RFX, Associazione EURATOM-ENEA sulla Fusione, Corso Stati Uniti 4, I-35127 Padova (Italy)
2009-06-15
In the ITER heating Neutral Beam Injector (NBI), a High Voltage air-insulated platform (named High Voltage Deck, HVD) will be installed to host the Ion Source and Extractor Power supply system and associated diagnostics referred to -1 MV DC potential. All power and control cables are routed from the HVD via a feedthrough (HV bushing) to the gas insulated transmission line which feeds the Injector. The paper focuses on insulation and mechanical issues for both HVD and HV bushing which are very special components, far from the present industrial standards as far as voltage (-1 MV DC) and dimensions are concerned. For this purpose, a preliminary design of the HVD has been carried out as concerns the mechanical structure and external shield. Then, the structure has been verified with a seismic analysis applying the seismic load excitation specified for the ITER construction site (Cadarache) and carrying out verifications according to relevant international standards. As regards the HV bushing design, proposals for the complex inner conductor structure and for interfaces to the HVD and transmission line are outlined; alternative installation layouts (aside or underneath the HVD) are compared from both mechanical and electrical points of view.
Chirkov, A. Yu.
2015-09-01
Low gain (Q ~ 1) fusion plasma systems are of interest for concepts of fusion-fission hybrid reactors. Operational regimes of large modern tokamaks are close to Q ≈ 1. Therefore, they can be considered as prototypes of neutron sources for fusion-fission hybrids. Powerful neutral beam injection (NBI) can sustain a substantial population of fast particles alongside the Maxwellian population. In such a two-component plasma, the fusion reaction rate is higher than for a purely Maxwellian plasma. The increased reaction rate allows the development of relatively small and relatively inexpensive neutron sources. Possible operating regimes of an NBI-heated tokamak neutron source are discussed. In a relatively compact device, the predictions of two-component fusion plasma physics carry some uncertainty, which requires taking variations of the operational parameters into account. The resulting parameter ranges are studied. The feasibility of regimes with Q ≈ 1 is shown for a relatively small, low-power system. The effect of the NBI fraction in the total heating power is analyzed.
Vescovi, Jason D
2014-07-01
The aim of this study was to examine the impact of maximum sprint speed on peak and mean sprint speed during youth female field hockey matches. Two high-level female field hockey teams (U-17, n = 24, and U-21, n = 20) were monitored during a 4-game international test series using global positioning system (GPS) technology and tested for maximum sprint speed. Dependent variables were compared using a 3-factor ANOVA (age group, position, and speed classification); effect sizes (Cohen d) and confidence limits were also calculated. Maximum sprint speed was similar between age groups and positions, with faster players having greater speed than slower players (29.3 ± 0.4 vs 27.2 ± 1.1 km/h). Overall, peak match speed in youth female field hockey players reaches approximately 90% of maximum sprint speed. Absolute peak match speed and mean sprint speed during matches were similar among the age groups (except match 1) and positions (except match 2); however, peak match speed was greater for faster players in matches 3 and 4. No differences were observed in the relative proportion of mean sprint speeds for age groups or positions, but slower players consistently displayed similar relative mean sprint speeds by using a greater proportion of their maximum sprint speed.
Regina A. A.
2010-12-01
The study models emissions from a stack to estimate ground-level concentrations from a palm oil mill. The case study is a mill located in Kuala Langat, Selangor, with boiler stacks as the emission source. The exercise estimates ground-level dust concentrations in the surrounding areas through the use of modelling software. The surrounding area is relatively flat: an industrial area surrounded by factories, with palm oil plantations on the outskirts. The model was set up to gauge the worst-case scenario, and ambient air concentrations were gathered to calculate the increase over localized conditions. Keywords: emission, modelling, palm oil mill, particulate, POME
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
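As a familiar point of comparison, CDF-based goodness-of-fit can be illustrated with the Kolmogorov-Smirnov statistic, which measures the largest gap between the empirical CDF of the samples and a candidate distribution's CDF. This is only a well-known stand-in for cumulative-distribution-based model comparison, not the "fidelity" statistic the paper defines.

```python
def ks_statistic(samples, cdf):
    """Two-sided Kolmogorov-Smirnov statistic of `samples` against a
    candidate CDF: sup_x |F_empirical(x) - F_candidate(x)|."""
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        fx = cdf(x)
        # Empirical CDF jumps from i/n to (i+1)/n at x; check both sides.
        d = max(d, abs((i + 1) / n - fx), abs(fx - i / n))
    return d
```

A small statistic means the candidate distribution is concordant with the data; comparing the statistic across candidate distributions addresses questions (1) and (3) above in a crude but coordinate-robust way.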
Tran Duy, A.; Smit, B.; Dam, van A.A.; Schrama, J.W.
2008-01-01
The aim of this study was to gain insight into how Nile tilapia (Oreochromis niloticus) regulate feed and energy intake in response to diets low and high in starch and cellulose. It was hypothesized that high-starch diets would reduce feed intake due to the effect of high blood glucose level, and th
Thompson, William L.; Lee, Danny C.
2000-11-01
Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits-per-spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every increase of 33% in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2 °C increase in mean annual air temperature.
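The Ricker-type stock-recruitment model mentioned above has the standard form R = aS·exp(-bS), where S is spawner abundance, a is recruits-per-spawner at low stock size, and b sets the strength of density dependence. The sketch below illustrates the curve and its peak with hypothetical parameter values; it is not fitted to the chinook salmon data of the study.

```python
import math

def ricker(spawners, a, b):
    """Ricker stock-recruitment curve R = a * S * exp(-b * S).
    Parameters a and b here are illustrative, not estimated values."""
    return a * spawners * math.exp(-b * spawners)

def ricker_peak(a, b):
    """Maximum recruitment occurs at S* = 1/b (set dR/dS = 0),
    giving R_max = a / (b * e)."""
    s_star = 1.0 / b
    return s_star, ricker(s_star, a, b)
```

The dome shape is what makes the model useful for recruitment data: recruitment rises roughly linearly (slope a) at low stock sizes, then declines once density-dependent effects dominate.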
EMERSON DE OLIVEIRA GHERI
2000-09-01
A greenhouse experiment was carried out to evaluate the effects of phosphorus (P) application on the dry matter production of Panicum maximum Jacq. cv. Tanzânia in sandy, medium-textured, and clayey soils. The experimental design was completely randomized, in a complete factorial scheme combining the three soil textures with five P levels: 0, 35, 70, 105 and 140 mg/dm³. After liming to raise the base saturation to 70% and applying P, the soils were moistened and, after 30 days, dried and sampled. Plastic pots with 10 dm³ of soil were used and the grass was grown for 76 days from seedling emergence. In this period two cuts were made: the first at 10 cm above the soil, 48 days after emergence, and the second at the soil surface, at 76 days. Dry matter production increased with P application in all three soils, and the largest increase was observed with 35 mg/dm³. The highest production was obtained in the medium-textured soil. The P content of the plants was adequate in the sandy and clayey soils; in the medium-textured soil it decreased as production increased, characterizing a dilution effect. From the relative production and the P content of each soil, a critical level of 38 mg/dm³ of resin-extracted P was determined.
European Food Safety Authority
2013-05-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance ethoxyquin. Although this active substance is no longer authorised within the European Union, an MRL was established by the Codex Alimentarius Commission (CXL). Based on the assessment of the available data, EFSA assessed the CXL, and a consumer risk assessment was carried out. The CXL was found not to be adequately supported by data and a possible risk to consumers was identified. Hence, further consideration by risk managers is needed.
European Food Safety Authority
2014-07-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance indolylbutyric acid. Considering that this active substance is not authorised for use on edible crops within the European Union, that no MRLs are established by the Codex Alimentarius Commission, and that no import tolerances were notified to EFSA, residues of indolylbutyric acid are not expected to occur in any plant or animal commodity. Available data were also not sufficient to derive a residue definition or a limit of quantification (LOQ) for enforcement against potential illegal uses.
European Food Safety Authority
2013-07-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance acetochlor. Considering that this active substance is no longer authorised within the European Union, that no MRLs are established by the Codex Alimentarius Commission, and that no import tolerances were notified to EFSA, residues of acetochlor are not expected to occur in any plant or animal commodity. Available data were also not sufficient to derive a residue definition or an LOQ for enforcement against potential illegal uses.
European Food Safety Authority
2013-05-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance cyanamide. Considering that this active substance is no longer authorised within the European Union, that no MRLs are established by the Codex Alimentarius Commission, and that no import tolerances were notified to EFSA, residues of cyanamide are not expected to occur in any plant or animal commodity. Available data were also not sufficient to derive a residue definition or an LOQ for enforcement against potential illegal uses.
European Food Safety Authority
2013-04-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance trifluralin. Considering that this active substance is no longer authorised within the European Union, that no MRLs are established by the Codex Alimentarius Commission, and that no import tolerances were notified to EFSA, residues of trifluralin are not expected to occur in any plant or animal commodity. Available data were also not sufficient to derive a residue definition or an LOQ for enforcement against potential illegal uses.
European Food Safety Authority
2013-05-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance asulam. Considering that this active substance is no longer authorised within the European Union, that no MRLs are established by the Codex Alimentarius Commission, and that no import tolerances were notified to EFSA, residues of asulam are not expected to occur in any plant or animal commodity. Available data were also not sufficient to derive a residue definition or an LOQ for enforcement against potential illegal uses.
European Food Safety Authority
2013-06-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance dicloran. Although this active substance is no longer authorised within the European Union, MRLs were established by the Codex Alimentarius Commission (CXLs). Based on the assessment of the available data, EFSA assessed the CXLs, and a consumer risk assessment was carried out. Some CXLs were found not to be adequately supported by data and, for some CXLs, a possible acute risk to consumers was also identified. Hence, further consideration by risk managers is needed.
European Food Safety Authority
2013-03-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance quinoclamine. Considering that this active substance is not authorised for use on edible crops within the European Union, that no MRLs are established by the Codex Alimentarius Commission, and that no import tolerances were notified to EFSA, residues of quinoclamine are not expected to occur in any plant or animal commodity. Available data were also not sufficient to derive a residue definition or an LOQ for enforcement against potential illegal uses.
European Food Safety Authority
2013-05-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance guazatine. Considering that this active substance is no longer authorised within the European Union, that no MRLs are established by the Codex Alimentarius Commission, and that no import tolerances were notified to EFSA, residues of guazatine are not expected to occur in any plant or animal commodity. Available data were also not sufficient to derive a residue definition or an LOQ for enforcement against potential illegal uses.
European Food Safety Authority
2013-08-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance propargite. Although this active substance is no longer authorised within the European Union, MRLs were established by the Codex Alimentarius Commission (CXLs). Based on the assessment of the available data, EFSA assessed the CXLs. The CXLs were found not to be adequately supported by data and a consumer risk assessment could not be carried out. Hence, further consideration by risk managers is needed.
European Food Safety Authority
2013-05-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance 1,3-dichloropropene. Considering that this active substance is no longer authorised within the European Union, that no MRLs are established by the Codex Alimentarius Commission, and that no import tolerances were notified to EFSA, residues of 1,3-dichloropropene are not expected to occur in any plant or animal commodity. Available data were also not sufficient to derive an LOQ for enforcement against potential illegal uses.
European Food Safety Authority
2014-05-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance dodemorph. Considering that this active substance is not authorised for use on edible crops within the European Union, that no MRLs are established by the Codex Alimentarius Commission, and that no import tolerances were notified to EFSA, residues of dodemorph are not expected to occur in any plant or animal commodity. Available data were also not sufficient to derive a residue definition or an LOQ for enforcement against potential illegal uses.
European Food Safety Authority
2014-07-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance indolylacetic acid. Considering that this active substance is no longer authorised within the European Union, that no MRLs are established by the Codex Alimentarius Commission, and that no import tolerances were notified to EFSA, residues of indolylacetic acid are not expected to occur in any plant or animal commodity. Available data were also not sufficient to derive a residue definition or a limit of quantification (LOQ) for enforcement against potential illegal uses.
European Food Safety Authority
2013-07-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance chloropicrin. Considering that this active substance is no longer authorised within the European Union, that no MRLs are established by the Codex Alimentarius Commission, and that no import tolerances were notified to EFSA, residues of chloropicrin are not expected to occur in any plant or animal commodity. Available data were also not sufficient to derive a residue definition or an LOQ for enforcement against potential illegal uses.
European Food Safety Authority
2013-06-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance propisochlor. Considering that this active substance is no longer authorised within the European Union, that no MRLs are established by the Codex Alimentarius Commission, and that no import tolerances were notified to EFSA, residues of propisochlor are not expected to occur in any plant or animal commodity. Available data were also not sufficient to derive a residue definition or an LOQ for enforcement against potential illegal uses.
Federal Institute for Risk Assessment
2015-01-01
Rice and rice-based products, such as rice cakes or rice flakes for creamed rice, can contain relatively high levels of inorganic arsenic. Inorganic arsenic is classified as carcinogenic for humans by international panels and no intake quantity can be defined as safe for human health with regard to its carcinogenic effect (cf. BfR opinion 018/2015). In the European Union, the introduction of maximum levels for inorganic arsenic in rice and rice products is being discussed on the basis of the ...
Guasp, J.
1995-07-01
The distribution of the exit points, on the plasma border, of fast ions lost during tangential balanced NBI in the TJ-II helical-axis stellarator is theoretically analysed, for both direct and delayed losses. The link between the positions of those exit points and the corresponding birth points, orbits, and drifts is also analysed. It is shown that this relation is rather independent of beam energy and plasma density and is mainly determined by the characteristics of the magnetic configuration. This study is a necessary intermediate step toward the analysis of the impacts of these ions on the TJ-II vacuum vessel. (Author) 2 refs.
Simpson, Matthew J. R.; Milne, Glenn A.; Huybrechts, Philippe; Long, Antony J.
2009-08-01
We constrain a three-dimensional thermomechanical model of Greenland ice sheet (GrIS) evolution from the Last Glacial Maximum (LGM, 21 ka BP) to the present-day using, primarily, observations of relative sea level (RSL) as well as field data on past ice extent. Our new model (Huy2) fits a majority of the observations and is characterised by a number of key features: (i) the ice sheet had an excess volume (relative to present) of 4.1 m ice-equivalent sea level at the LGM, which increased to reach a maximum value of 4.6 m at 16.5 ka BP; (ii) retreat from the continental shelf was not continuous around the entire margin, as there was a Younger Dryas readvance in some areas. The final episode of marine retreat was rapid and relatively late (c. 12 ka BP), leaving the ice sheet land based by 10 ka BP; (iii) in response to the Holocene Thermal Maximum (HTM) the ice margin retreated behind its present-day position by up to 80 km in the southwest, 20 km in the south and 80 km in a small area of the northeast. As a result of this retreat the modelled ice sheet reaches a minimum extent between 5 and 4 ka BP, which corresponds to a deficit volume (relative to present) of 0.17 m ice-equivalent sea level. Our results suggest that remaining discrepancies between the model and the observations are likely associated with non-Greenland ice load, differences between modelled and observed present-day ice elevation around the margin, lateral variations in Earth structure and/or the pattern of ice margin retreat.
European Food Safety Authority
2014-04-01
EFSA was requested by the European Commission to perform a dietary exposure assessment for the proposed temporary maximum residue levels (MRLs) for didecyldimethylammonium chloride (DDAC) and benzalkonium chloride (BAC) (0.1 mg/kg, respectively) for all food commodities covered by the EU MRL legislation. Based on the available information, EFSA did not identify potential consumer health risks for these proposed MRLs. Thus, the proposed MRLs are considered to be sufficiently protective. However, due to the limited data available, the risk assessments are affected by a high degree of uncertainty.
European Food Safety Authority
2013-05-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance dichlobenil. Considering that this active substance is no longer authorised within the European Union, that no MRLs are established by the Codex Alimentarius Commission, and that no import tolerances were notified to EFSA, residues of dichlobenil are not expected to occur in any plant or animal commodity. Available data were also not sufficient to derive a residue definition or an LOQ for enforcement against potential illegal uses for the time being; this assessment may, however, be reconsidered when the future review of MRLs for fluopicolide under the aforementioned Regulation (EC) No 396/2005 is carried out, because fluopicolide is an authorised pesticide active substance that generates a metabolite common to dichlobenil.
European Food Safety Authority
2014-01-01
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance kresoxim-methyl. In order to assess the occurrence of kresoxim-methyl residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission, as well as the import tolerances and European authorisations reported by Member States (incl. the supporting residue data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and some MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2014-01-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance fenhexamid. In order to assess the occurrence of fenhexamid residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and some MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2014-07-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance boscalid. In order to assess the occurrence of boscalid residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the import tolerances and European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and all MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2013-09-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance flutolanil. In order to assess the occurrence of flutolanil residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and some MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2012-10-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance etoxazole. In order to assess the occurrence of etoxazole residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and some MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2014-02-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance trifloxystrobin. In order to assess the occurrence of trifloxystrobin residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the import tolerances and European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and some MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2013-12-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance azoxystrobin. In order to assess the occurrence of azoxystrobin residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the import tolerances and European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and some MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2014-05-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance folpet. In order to assess the occurrence of folpet residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and some MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2014-04-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance teflubenzuron. In order to assess the occurrence of teflubenzuron residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and some MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2012-11-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance fenamidone. In order to assess the occurrence of fenamidone residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC as well as the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and some MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2013-01-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance metsulfuron-methyl. In order to assess the occurrence of metsulfuron-methyl residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC and the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and some MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2014-07-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance thiabendazole. In order to assess the occurrence of thiabendazole residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the import tolerances and European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and all MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2012-12-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance flurtamone. In order to assess the occurrence of flurtamone residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC as well as the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. No information required by the regulatory framework was found to be missing and no risk to consumers was identified.
European Food Safety Authority
2012-11-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance carfentrazone-ethyl. In order to assess the occurrence of carfentrazone-ethyl residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC as well as the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and all MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2013-02-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide acibenzolar-S-methyl. In order to assess the occurrence of acibenzolar-S-methyl residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC as well as the import tolerances and European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and some MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2012-11-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance mesosulfuron. In order to assess the occurrence of mesosulfuron residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC as well as the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. No information required by the regulatory framework was found to be missing and no risk to consumers was identified.
European Food Safety Authority
2012-12-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide cyazofamid. In order to assess the occurrence of cyazofamid residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC as well as the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified, some information required by the regulatory framework was found to be missing. Hence, the consumer risk assessment is considered indicative only and some MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2013-07-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance methyl bromide. Although this active substance is no longer authorised within the European Union, guideline levels for methyl bromide (at point of retail sale or when offered for consumption) and MRLs for bromide ion, which is a relevant metabolite of methyl bromide, were established by the Codex Alimentarius Commission (CXLs). Regarding methyl bromide, the default MRL of 0.01 mg/kg as defined by Regulation (EC) No 396/2005 is compliant with the Codex guideline levels and provides a satisfactory level of protection for the European consumer, but it could not be demonstrated that the default MRL can be achieved in routine enforcement. Moreover, based on the assessment of the available data, some CXLs were found not to be adequately supported by data and the consumer risk assessment could not be finalised, as the toxicological reference values of bromide ion need to be revised and only limited information on the natural occurrence of bromide ion in food was available to EFSA. Hence, further consideration by risk managers is needed.
Gutser, Raphael
2010-07-21
The injection of fast neutral particles (NBI) into a fusion plasma is an important method for plasma heating and current drive. The ITER NBI requires a source of negative deuterium ions delivering a 1 MeV beam, in which the ions are accelerated to the specified energy and then neutralized in a gas target. Cesium seeding is required to extract high negative-ion current densities from these sources, and optimizing the homogeneity and control of the cesium is a major objective in meeting the source requirements imposed by ITER. Within the scope of this thesis, the Monte Carlo based numerical transport simulation CsFlow3D was developed; it is the first computer model capable of simulating the flux and accumulation of cesium on the surfaces of negative-ion sources. Basic studies supporting the code development were performed in a dedicated experiment at the University of Augsburg. Input parameters for the adsorption and desorption of cesium under ion-source-relevant conditions were taken from systematic measurements with a quartz microbalance, while the injection rate of the cesium oven at the ion source was determined by surface ionization detection. This experimental setup was also used for further investigations of the work function of cesium-coated samples during plasma exposure. (orig.)
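The kind of surface bookkeeping such a Monte Carlo transport code performs can be illustrated with a deliberately minimal sketch: particles stick to a wall with some probability and adsorbed particles may later desorb. The sticking and desorption probabilities and the injection rate below are invented for illustration; they are not the measured CsFlow3D input parameters described in the thesis.

```python
import random

def simulate(n_steps, inject_per_step, p_stick, p_desorb, seed=1):
    """Toy wall-accumulation Monte Carlo: count adsorbed particles over time."""
    rng = random.Random(seed)
    adsorbed = 0
    for _ in range(n_steps):
        # Injection: each new particle sticks on first wall contact with p_stick.
        for _ in range(inject_per_step):
            if rng.random() < p_stick:
                adsorbed += 1
        # Previously adsorbed particles desorb with probability p_desorb per step.
        desorbed = sum(1 for _ in range(adsorbed) if rng.random() < p_desorb)
        adsorbed -= desorbed
    return adsorbed

# After many steps the coverage settles near the balance point
# inject_per_step * p_stick / p_desorb (here ~500 particles).
coverage = simulate(n_steps=2000, inject_per_step=10, p_stick=0.5, p_desorb=0.01)
```

The steady state emerges when the sticking flux balances the desorption flux, which is the same balance a full 3D code resolves per surface element with measured rates and real geometry.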
Le Brocq, A.; Bentley, M.; Hubbard, A.; Fogwill, C.; Sugden, D.
2008-12-01
A numerical ice sheet model constrained by recent field evidence is employed to reconstruct the Last Glacial Maximum (LGM) ice sheet in the Weddell Sea Embayment (WSE). Previous modelling attempts have predicted an extensive grounding line advance (to the continental shelf break) in the WSE, leading to a large equivalent sea level contribution for the sector. The sector has therefore been considered as a potential source for a period of rapid sea level rise (MWP1a, 20 m rise in ~500 years). Recent field evidence suggests that the elevation change in the Ellsworth mountains at the LGM is lower than previously thought (~400 m). The numerical model applied in this paper suggests that a 400 m thicker ice sheet at the LGM does not support such an extensive grounding line advance. A range of ice sheet surfaces, resulting from different grounding line locations, lead to an equivalent sea level estimate of 1 - 3 m for this sector. It is therefore unlikely that the sector made a significant contribution to sea level rise since the LGM, and in particular to MWP1a. The reduced ice sheet size also has implications for the correction of GRACE data, from which Antarctic mass balance calculations have been derived.
Van de Perre, Evelien; Jacxsens, Liesbeth; Lachat, Carl; El Tahan, Fouad; De Meulenaer, Bruno
2015-01-01
In this study the impact of setting European criteria on exposure to aflatoxin B1 (AFB1) via nuts and figs and to ochratoxin A (OTA) via dried fruits is evaluated for the Belgian population, as an example of the European population. Two different scenarios were evaluated. In scenario 1 all collected literature data are considered, assuming that there is neither border control nor legal limits in Europe. In the second scenario, contamination levels above the maximum limits are excluded. The results of scenario 1 demonstrate that if no regulation is in place, the AFB1 and OTA concentrations reported in the analysed foods can pose a potential health risk to the population. The estimated exposure to OTA in scenario 2 is below the TDI of 5 ng/kg b.w. per day, indicating that OTA concentrations accepted by EU legislation pose a low risk to the Belgian population. For AFB1, the MOE values of scenario 2 are above 10,000 and can be considered of low health concern, based on the BMDL10 for humans, except for figs (MOE = 5782). This means that for all matrices, with the exception of figs, the maximum values for AFB1 in the European legislation are sufficient to be of low health concern for consumers.
Thorbek, P; Hyder, K
2006-08-01
Residues on foodstuffs resulting from the use of crop-protection products are a function of many factors, e.g. environmental conditions, dissipation and application rate, some of which are linked to the physicochemical properties of the active ingredients. Residue limits (maximum residue levels (MRLs) and tolerances) of fungicides, herbicides and insecticides set by different regulatory authorities are compared, and the relationship between physicochemical properties of the active ingredients and residue limits are explored. This was carried out using simple summary statistics and artificial neural networks. US tolerances tended to be higher than European Union MRLs. Generally, fungicides had the highest residue limits followed by insecticides and herbicides. Physicochemical properties (e.g. aromatic proportion, non-carbon proportion and water solubility) and crop type explained up to 50% of the variation in residue limits. This suggests that physicochemical properties of the active ingredients may control important aspects of the processes leading to residues.
Zvolensky, Michael J; Sachs-Ericsson, Natalie; Feldner, Matthew T; Schmidt, Norman B; Bowman, Carrie J
2006-03-30
The present study evaluated a moderational model of neuroticism on the relation between smoking level and panic disorder using data from the National Comorbidity Survey. Participants (n=924) included current regular smokers, as defined by a report of smoking regularly during the past month. Findings indicated that a generalized tendency to experience negative affect (neuroticism) moderated the effects of maximum smoking frequency (i.e., number of cigarettes smoked per day during the period when smoking the most) on lifetime history of panic disorder even after controlling for drug dependence, alcohol dependence, major depression, dysthymia, and gender. These effects were specific to panic disorder, as no such moderational effects were apparent for other anxiety disorders. Results are discussed in relation to refining recent panic-smoking conceptual models and elucidating different pathways to panic-related problems.
Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael
2014-01-01
Background: Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. Objectives: We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. Methods: We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road networks information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. Results: The BME-LUR was the best predictive model (R2 = 0.653) with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747 ppb) and the BME kriging model (R2 = 0.414, RMSE = 9.164 ppb). Conclusions: Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data. Citation: Adam-Poupart A, Brand A, Fournier M, Jerrett M, Smargiassi A. 2014. Spatiotemporal modeling of ozone levels in Quebec (Canada): a comparison of kriging, land-use regression (LUR), and combined Bayesian maximum entropy–LUR approaches. Environ Health Perspect 122:970–976; http://dx.doi.org/10.1289/ehp.1306566 PMID:24879650
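The leave-one-station-out scheme used to compare the three models can be sketched generically: each station is held out in turn, the model is refit on the rest, and RMSE and R2 are computed from the held-out prediction errors. The station values and the mean-only "model" below are toy inputs, not the Quebec O3 data or the LUR/BME models themselves.

```python
import math

def loo_cross_validate(stations, fit, predict):
    """Leave-one-station-out CV: hold out each station, fit on the rest,
    and collect the prediction error at the held-out station."""
    errors, observed = [], []
    for held_out in stations:
        train = {s: v for s, v in stations.items() if s != held_out}
        model = fit(train)
        errors.append(predict(model, held_out) - stations[held_out])
        observed.append(stations[held_out])
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1 - sum(e * e for e in errors) / ss_tot
    return rmse, r2

# Toy example: the "model" is just the mean of the training stations (ppb O3).
obs = {"A": 30.0, "B": 34.0, "C": 38.0, "D": 42.0}
fit = lambda train: sum(train.values()) / len(train)
predict = lambda model, station: model
rmse, r2 = loo_cross_validate(obs, fit, predict)
```

Any of the three models (LUR, BME-LUR, BME kriging) would slot in as the `fit`/`predict` pair, which is what makes the reported RMSE and R2 values directly comparable.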
Lionello, Piero; Conte, Dario; Marzo, Luigi; Scarascia, Luca
2017-04-01
The maximum level that water reaches during a storm along the coast has important consequences on coastal defences and coastal erosion. It depends on future sea level, storm surges, ocean wind generated waves, vertical land motion. The future sea level in turn depends on water mass addition and steric contributions (with a thermosteric and halosteric component). This study proposes a practical methodology for assessing the effects of these different factors (which need to be estimated at sub-regional scale) and applies it to a 7-member model ensemble of regional climate model simulations (developed and carried out in the CIRCE fp6 project) covering the period 1951-2050 under the A1B emission scenario. Sea level pressure and wind fields are used for forcing a hydro-dynamical shallow water model (HYPSE), wind fields are used for forcing a wave model (WAM), obtaining estimates of storm surges and ocean waves, respectively. Thermosteric and halosteric effects are diagnosed from the projections of sea temperature and salinity. Steric expansion and storminess are shown to be contrasting factors: in the next decades wave and storm surge maxima will decrease while thermosteric expansion will increase mean sea level. These two effects will to a large extent compensate each other, so that their superposition will increase/decrease the maximum water level along two comparable fractions of the coastline (about 15-20%) by the mid 21st century. However, mass addition across the Gibraltar Strait to the Mediterranean Sea will likely become the dominant factor and determine an increase of the maximum water level along most of the coastline.
Guasp, J.
1995-07-01
The possible deposition patterns on the Vacuum Vessel of fast ions lost during balanced tangential NBI in the TJ-II helical-axis stellarator are analysed theoretically, establishing the relation between the impact points, the plasma exit and birth positions, and the characteristics of the magnetic configuration. It is shown that direct losses are the most important, mainly those produced by the beam injected in the same direction as the magnetic field; they increase with beam energy and plasma density, but the impacts remain fixed on well-defined zones periodically distributed along the Hard Core cover plates, producing high loads at high densities. The remaining losses, except for the shine-through losses that predominate at low density, are periodically distributed with smooth maxima and produce very low loads. No overlapping between the different kinds of losses or beams is observed. (Author) 6 refs.
Tanaka, Shinji; Sano, Yasushi
2011-05-01
At present, there are many narrow band imaging (NBI) magnifying observation classifications for colorectal tumors in Japan. To internationally standardize the NBI observation criteria, a simple classification system is required. When a colorectal tumor is closely observed using a recent high-resolution videocolonoscope, a pit-like pattern on the tumor can be observed to a certain degree without magnification. At the symposium a consensus was reached to name this pit-like pattern the 'surface pattern.' Using the NBI system, the microvessels on the tumor surface can also be recognized to a certain degree. When the NBI system is used, the structure is emphasized, and consequently the surface pattern can be recognized easily. Recently, an international cooperative group named the Colon Tumor NBI Interest Group was formed, consisting of members from Japan, the USA and Europe. This group has developed a simple category classification (the NBI International Colorectal Endoscopic [NICE] classification), which classifies colorectal tumors into types 1-3 by close observation with a high-resolution videocolonoscope, even without magnification (a validation study by the Colon Tumor NBI Interest Group is ongoing). The key advantage of this is the simplification of the NBI classification. Although magnifying observation is best for obtaining detailed NBI findings, close observation and magnifying observation using the NICE classification might give almost identical results; of course, the NICE classification can be used more precisely with magnification. In this report we also address issues concerning NBI magnification that should be resolved as early as possible.
陈俞钱; 谢亚红; 胡纯栋
2015-01-01
Neutral beam injection (NBI) is a very successful and effective auxiliary heating method in large tokamak fusion devices. The cooling of the extraction system and of the back panel of the arc chamber (the backstream-electron dump plate) limits the development of the long-pulse, high-current ion source, which is the key component of the NBI system. In this paper, the magnetic field of the dump-plate permanent magnets is simplified as a flaring magnetic field, so that the motion of backstream electrons towards the dump plate can be simplified as the converging helical motion of charged particles in a flaring field. The heat load deposited on the dump plate by the backstream electron flux was simulated with this simplified model, providing a reference for optimizing the long-pulse beam extraction system.
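The simplification above treats backstream electrons as charged particles spiraling into a converging field, whose motion along the field is governed by conservation of the magnetic moment mu = m*v_perp^2/(2B) together with kinetic energy. A minimal sketch of the resulting behaviour follows; the field strength and velocity values are illustrative, not ion-source parameters from the paper.

```python
import math

def v_parallel_at(B, B0, v_par0, v_perp0):
    """Parallel speed of a particle that moved from field B0 to field B,
    with magnetic moment and kinetic energy conserved.
    Returns None if the particle mirrors (reflects) before reaching B."""
    v_perp_sq = v_perp0**2 * (B / B0)                 # mu conservation
    v_par_sq = v_par0**2 + v_perp0**2 - v_perp_sq     # energy conservation
    if v_par_sq < 0:
        return None  # mirrored: perpendicular motion absorbed all the energy
    return math.sqrt(v_par_sq)

B0 = 0.01                          # T, field where the electron starts (illustrative)
v_par0, v_perp0 = 1.0e6, 5.0e5     # m/s (illustrative)

# Mirror field: reflection occurs where B = B0 * (v_par0^2 + v_perp0^2) / v_perp0^2.
B_mirror = B0 * (v_par0**2 + v_perp0**2) / v_perp0**2
```

As B grows along the flaring field, v_perp grows and v_parallel shrinks; electrons whose pitch angle keeps them below the mirror field reach the dump plate and deposit their energy there, which is the heat-load mechanism the simplified model evaluates.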
EFSA Panel on Contaminants in the Food Chain (CONTAM)
2013-12-01
Full Text Available The European Food Safety Authority (EFSA) was asked to deliver a scientific opinion on the risks for public health related to a possible increase of the maximum level (ML) of deoxynivalenol (DON) for certain semi-processed cereal products from 750 µg/kg to 1000 µg/kg. For this statement, EFSA relied on existing occurrence data on DON in food collected between 2007 and 2012 and reported by 21 European countries. Due to the lack of appropriate occurrence data from pre-market monitoring, the impact of increasing the ML was estimated using a simulation approach, resulting in an expected increase in mean levels of the respective food products by a factor of 1.14-1.16. Based on median chronic exposure in several age classes, the percentage of consumers exceeding the group provisional maximum tolerable daily intake (PMTDI) of 1 µg/kg body weight (b.w.) for the sum of DON and its 3- and 15-acetyl derivatives, established by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) in 2010, is approximately 2-fold higher with the suggested increased ML than with the current ML. Several acute exposure scenarios resulted in exceedance of the group acute reference dose (ARfD) of 8 µg/kg b.w. established by JECFA, with up to 25.9 % of the consumption days above the group ARfD. The EFSA Scientific Panel on Contaminants in the Food Chain notes that the group health-based guidance values (HBGVs) include 3-Ac-DON and 15-Ac-DON. The exposure from the acetyl derivatives has not been covered in this statement, since they are not included in the current or suggested increased ML and only limited occurrence data are available. An increase of the DON ML can be expected to be associated with an increase of the levels of DON and Ac-DONs, and can therefore increase the exposure and consequently the exceedances of the group HBGVs.
Singh, Mukhtiyar [Department of Physics, Kurukshetra University, Kurukshetra-136119, Haryana (India); Saini, Hardev S. [Department of Physics, Panjab University, Chandigarh-160014 (India); Thakur, Jyoti [Department of Physics, Kurukshetra University, Kurukshetra-136119, Haryana (India); Reshak, Ali H. [New Technologies—Research Center, University of West Bohemia, Univerzitni 8, 306 14 Pilsen (Czech Republic); Center of Excellence Geopolymer and Green Technology, School of Material Engineering, University Malaysia Perlis, 01007 Kangar, Perlis (Malaysia); Kashyap, Manish K., E-mail: manishdft@gmail.com [Department of Physics, Kurukshetra University, Kurukshetra-136119, Haryana (India)
2014-12-15
We report a full-potential treatment of the electronic and magnetic properties of Cr{sub 2−x}Fe{sub x}CoZ (Z=Al, Si) Heusler alloys, where x=0.0, 0.25, 0.5, 0.75 and 1.0, based on density functional theory (DFT). Neither parent alloy (Cr{sub 2}CoAl or Cr{sub 2}CoSi) is a half-metallic ferromagnet. The gradual replacement of one Cr sublattice with Fe induces half-metallicity in these systems, resulting in maximum spin polarization. Half-metallicity starts to appear in Cr{sub 2−x}Fe{sub x}CoAl and Cr{sub 2−x}Fe{sub x}CoSi at x=0.50 and x=0.25, respectively, and the values of the minority-spin gap and the half-metallic (spin-flip) gap increase with further increase of x. These gaps are found to be maximum at x=1.0 in both cases. Excellent agreement is obtained between the structural properties of CoFeCrAl and the available experimental study. The Fermi-level tuning by Fe doping makes these alloys highly spin polarized, so they can be regarded as promising candidates for spin valve and magnetic tunnelling junction applications. - Highlights: • Tuning of E{sub F} in Cr{sub 2}CoZ (Z=Al, Si) has been demonstrated via Fe doping. • The effect of Fe doping on half-metallicity and magnetism has been discussed. • The new alloys have the potential to be used as spin-polarized electrodes.
European Food Safety Authority
2015-01-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance glufosinate. In order to assess the occurrence of glufosinate residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Some information required by the regulatory framework was found to be missing and a possible acute risk to consumers was identified. Hence, the consumer risk assessment is considered indicative only, some MRL proposals derived by EFSA still require further consideration by risk managers, and measures for reduction of the consumer exposure should also be considered.
European Food Safety Authority
2014-03-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance thiacloprid. In order to assess the occurrence of thiacloprid residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the import tolerances and European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Some information required by the regulatory framework was found to be missing and a possible acute risk to consumers was identified. Hence, the consumer risk assessment is considered indicative only, some MRL proposals derived by EFSA still require further consideration by risk managers, and measures for reduction of the consumer exposure should also be considered.
European Food Safety Authority
2015-01-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance pirimiphos-methyl. In order to assess the occurrence of pirimiphos-methyl residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Some information required by the regulatory framework was found to be missing and a possible chronic risk to consumers was identified. Hence, the consumer risk assessment is considered indicative only, all MRL proposals derived by EFSA still require further consideration by risk managers, and measures for reduction of the consumer exposure should also be considered.
European Food Safety Authority
2013-04-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance flusilazole. In order to assess the occurrence of flusilazole residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Some information required by the regulatory framework was found to be missing and a possible acute risk to consumers was identified. Hence, the consumer risk assessment is considered indicative only, some MRL proposals derived by EFSA still require further consideration by risk managers, and measures for reduction of the consumer exposure should also be considered.
European Food Safety Authority
2014-01-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance methoxyfenozide. In order to assess the occurrence of methoxyfenozide residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the import tolerances and European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Although no apparent risk to consumers was identified regarding the European authorisations, some information required by the regulatory framework was found to be missing and a possible acute risk to consumers was identified for some of the MRLs established by the Codex Alimentarius Commission. Hence, the consumer risk assessment is considered indicative only and some MRL proposals derived by EFSA still require further consideration by risk managers.
European Food Safety Authority
2013-04-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance propamocarb. In order to assess the occurrence of propamocarb residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Some information required by the regulatory framework was found to be missing and a possible acute risk to consumers was identified. Hence, the consumer risk assessment is considered indicative only, some MRL proposals derived by EFSA still require further consideration by risk managers, and measures for reduction of the consumer exposure should also be considered.
European Food Safety Authority
2014-04-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance captan. In order to assess the occurrence of captan residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Some information required by the regulatory framework was found to be missing and a possible acute risk to consumers was identified. Hence, the consumer risk assessment is considered indicative only, some MRL proposals derived by EFSA still require further consideration by risk managers, and measures for reduction of the consumer exposure should also be considered.
European Food Safety Authority
2012-10-01
Full Text Available
According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance chlorothalonil. In order to assess the occurrence of chlorothalonil residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the import tolerances and European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals for parent chlorothalonil in plant commodities and for 2,5,6-trichloro-4-hydroxyphthalonitrile (SDS-3701) in animal commodities were derived, and a consumer risk assessment was carried out. Some information required by the regulatory framework was found to be missing (in particular with regard to metabolite SDS-3701) and a possible acute risk to consumers was identified for parent chlorothalonil. Hence, the consumer risk assessment is considered indicative only, all MRL proposals derived by EFSA still require further consideration by risk managers, and measures for reduction of the consumer exposure should also be considered.
European Food Safety Authority
2014-01-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance lambda-cyhalothrin. In order to assess the occurrence of lambda-cyhalothrin residues in plants, processed commodities, rotational crops and livestock, EFSA considered the conclusions derived in the framework of Directive 91/414/EEC, the MRLs established by the Codex Alimentarius Commission as well as the import tolerances and European authorisations reported by Member States (incl. the supporting residues data). Based on the assessment of the available data, MRL proposals were derived and a consumer risk assessment was carried out. Some information required by the regulatory framework was found to be missing and a possible acute risk to consumers was identified. Hence, the consumer risk assessment is considered indicative only, some MRL proposals derived by EFSA still require further consideration by risk managers, and measures for reduction of the consumer exposure should also be considered.
Melnikov, A. V.; Eliseev, L. G.; Castejón, F.; Hidalgo, C.; Khabanov, P. O.; Kozachek, A. S.; Krupnik, L. I.; Liniers, M.; Lysenko, S. E.; de Pablos, J. L.; Sharapov, S. E.; Ufimtsev, M. V.; Zenin, V. N.; HIBP Group; TJ-II Team
2016-11-01
Alfvén eigenmodes (AEs) were studied in the low-magnetic-shear flexible heliac TJ-II (B_0 = 0.95 T, R_0 = 1.5 m, ⟨a⟩ = 0.22 m) in neutral beam injection (NBI) heated plasmas (P_NBI ⩽ 1.1 MW, E_NBI = 32 keV) using the heavy ion beam probe (HIBP). L-mode hydrogen plasmas heated with co-, counter- and balanced-NBI and electron cyclotron resonance heating (ECRH) were investigated in various magnetic configurations with rotational transform ι(a)/2π = 1/q ~ 1.5-1.6. The HIBP diagnostic is capable of simultaneously measuring the oscillations of the plasma electric potential, density and poloidal magnetic field. In earlier studies, chirping modes with frequencies up to ~250 kHz have been observed; the electric potential perturbations have a ballooning character, while the density and B_pol perturbations are nearly symmetric for both ECRH + NBI and NBI-only plasmas. On TJ-II, the dominant effect on the nonlinear evolution of the AE from the chirping state to the steady-frequency state is the magnetic configuration, determined by the vacuum ι and the plasma current I_pl.
Garrido, Nuno D; Silva, António J; Fernandes, Ricardo J; Barbosa, Tiago M; Costa, Aldo M; Marinho, Daniel A; Marques, Mário C
2012-06-01
The relationship between handgrip isometric strength and swimming performance was assessed in the four competitive swimming strokes in swimmers of different age groups and of both sexes. 78 national-level Portuguese swimmers (39 males, 39 females) were selected for this study. Grip strength, previously used as a marker of overall strength to predict future swimming performance, was measured using a hand dynamometer. The best competitive times at 100 and 200 m in all four swimming strokes were converted into 2010 FINA points. Non-parametric tests were used to evaluate differences between groups. Pearson product-moment correlations were computed to verify the association between variables. Handgrip maximum isometric strength was significantly correlated with swimming performance, particularly among female swimmers. Among female age-group swimmers, the relationship between handgrip strength and 100-m freestyle performance was significant. Handgrip isometric strength therefore seems to be related to swimming performance, especially in the 100-m freestyle and in female swimmers. For all other distances and strokes, technique and training are probably more influential than semi-hereditary strength markers such as grip strength.
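A Pearson product-moment correlation of the kind used in the study can be computed from scratch; the grip-strength and FINA-point values below are made up for illustration and are not the study's data:

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient:
    r = cov(x, y) / (sd(x) * sd(y))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical grip strength (kgf) vs. 100-m freestyle FINA points.
grip = [28, 31, 33, 36, 40, 43, 45]
fina = [410, 455, 440, 500, 520, 560, 585]
r = pearson_r(grip, fina)
print(round(r, 3))
```

A strong positive r here would mirror the reported association for female swimmers, though significance would additionally require a p-value or a critical-value test at the given sample size.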
Doganlar, Oguzhan; Doganlar, Zeynep Banu; Tabakcioglu, Kiymet
2015-10-01
In this study, we aimed to investigate the mutagenic and carcinogenic potential of a volatile organic compound (VOC) mixture with reference to the response of D. melanogaster, using selected antioxidant gene expressions, the RAPD assay, and base-pair changes in ribosomal 18S and internal transcribed spacer (ITS2) rDNA gene sequences. For this purpose, Drosophila melanogaster Oregon R flies, reared under controlled conditions on artificial diets, were treated with a mixture of thirteen VOCs commonly found in water, at concentrations of 10, 20, 50, and 75 ppb, for 1 and 5 days. In the random amplified polymorphic DNA (RAPD) assay, band changes were clearly detected, especially at the 50 and 75 ppb exposure levels, for both treatment periods, and the band profiles exhibited clear differences between treated and untreated flies, with changes in band intensity and the loss/appearance of bands. Quantitative real-time PCR (qRT-PCR) analysis of Mn-superoxide dismutase (Mn-SOD), catalase (CAT) and glutathione synthetase (GS) expression demonstrated that these markers responded significantly to VOC-induced oxidative stress. Whilst CAT gene expression increased linearly with increasing VOC concentrations and treatment times, the 50- and 75-ppb treatments caused decreases in GS expression compared to the control at 5 days. Treatment with VOCs at both exposure times, especially at high doses, caused mutations in the 18S and ITS2 ribosomal DNA. Based on these results, we suggest that VOCs at the maximum permissible contamination level can cause genotoxic effects, especially in mixtures.
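The abstract does not state how the qRT-PCR expression data were quantified; one common approach for such data is the 2^(-ΔΔCt) method, sketched here with hypothetical Ct values (an assumption for illustration, not the authors' stated analysis):

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method:
    first normalize the target gene to a reference (housekeeping) gene
    within each condition, then compare treated vs. control."""
    ddct = ((ct_target_treated - ct_ref_treated)
            - (ct_target_control - ct_ref_control))
    return 2 ** (-ddct)

# Hypothetical Ct values for CAT normalized to a housekeeping gene:
# lower Ct in the treated sample means more transcript.
print(fold_change(22.0, 18.0, 24.0, 18.0))  # → 4.0 (4-fold up-regulation)
```

A fold change above 1 would correspond to the reported increase in CAT expression under VOC exposure; values below 1 would correspond to the suppressed GS expression at 5 days.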
I Made Oka Adi Parwata
2016-09-01
Full Text Available Background: Oxidative stress occurs due to an imbalance between the number of free radicals and the amount of endogenous antioxidants produced by the body, i.e. superoxide dismutase (SOD), glutathione peroxidase (GPx), and catalase. This imbalance can be redressed by exogenous antioxidant intake, so that oxidative stress is reduced. One such exogenous antioxidant is a natural Gaharu leaf water extract. Objective: This research focuses on the effect of Gaharu leaf water extract in reducing MDA and 8-OHdG and increasing the activity of SOD and catalase. Methods: This was an experimental study with a post-test-only control group design. The experiment comprised 5 groups of Wistar rats, each consisting of 5 animals: a negative control group without extract [K(-)], treatment 1 given 50 mg/kg BW/day of the extract (T1), treatment 2 given 100 mg/kg BW/day of the extract (T2), treatment 3 given 200 mg/kg BW/day of the extract (T3), and a positive control group [K(+)] given vitamin C at a dose of 50 mg/kg BW/day. All groups were treated for 10 weeks. Every day, before treatment, each group performed maximal swimming activity for 1.5 hours. ELISA was used to measure MDA, 8-OHdG, SOD, and catalase activities. Results: Treatment with Gaharu leaf extract at increasing doses from 50 mg/kg BW up to 200 mg/kg BW significantly reduced (p < 0.05) levels of MDA, with means of 6.37±0.23, 5.56±0.27 and 4.32±0.27, and of 8-OHdG, with means of 1.64±0.11, 1.26±0.46, and 1.09±0.17. The treatment also increased SOD activity, with values of 12.15±1.04, 15.70±2.02, and 18.84±1.51, and catalase activity, with values of 6.68±0.63, 8.20±1.14 and 9.29±0.79, in the blood of Wistar rats subjected to maximal activity, compared to the negative control group. This is probably due to the higher phenolic compound (bioflavonoid) content of the extract.
Sluijs, A.; van Roij, L.; Harrington, G. J.; Schouten, S.; Sessa, J. A.; LeVay, L. J.; Reichart, G.-J.; Slomp, C. P.
2014-07-01
The Paleocene-Eocene Thermal Maximum (PETM, ~ 56 Ma) was a ~ 200 kyr episode of global warming, associated with massive injections of 13C-depleted carbon into the ocean-atmosphere system. Although climate change during the PETM is relatively well constrained, effects on marine oxygen concentrations and nutrient cycling remain largely unclear. We identify the PETM in a sediment core from the US margin of the Gulf of Mexico. Biomarker-based paleotemperature proxies (methylation of branched tetraether-cyclization of branched tetraether (MBT-CBT) and TEX86) indicate that continental air and sea surface temperatures warmed from 27-29 to ~ 35 °C, although variations in the relative abundances of terrestrial and marine biomarkers may have influenced these estimates. Vegetation changes, as recorded from pollen assemblages, support this warming. The PETM is bracketed by two unconformities. It overlies Paleocene silt- and mudstones and is rich in angular (thus in situ produced; autochthonous) glauconite grains, which indicate sedimentary condensation. A drop in the relative abundance of terrestrial organic matter and changes in the dinoflagellate cyst assemblages suggest that rising sea level shifted the deposition of terrigenous material landward. This is consistent with previous findings of eustatic sea level rise during the PETM. Regionally, the attribution of the glauconite-rich unit to the PETM implicates the dating of a primate fossil, argued to represent the oldest North American specimen on record. The biomarker isorenieratene within the PETM indicates that euxinic photic zone conditions developed, likely seasonally, along the Gulf Coastal Plain. A global data compilation indicates that O2 concentrations dropped in all ocean basins in response to warming, hydrological change, and carbon cycle feedbacks. This culminated in (seasonal) anoxia along many continental margins, analogous to modern trends. Seafloor deoxygenation and widespread (seasonal) anoxia likely
LIAO Leng; JIN Kui-Juan; HAN Peng; ZHANG Li-Li; L(U) Hui-Bin; GE Chen
2009-01-01
The photovoltaic response in the heterojunction La1−xSrxMnO3/SrNbyTi1−yO3 (LSMO/SNTO) is analyzed theoretically based on the drift-diffusion model. It is found that decreasing the acceptor concentration in the La1−xSrxMnO3 layer of the heterojunction can increase the peak value of the photovoltaic signal and the speed of the photovoltaic response, whereas changing the donor concentration in the SrNbyTi1−yO3 layer has no such evident effect. Furthermore, the results also indicate that modulating the Sr doping in La1−xSrxMnO3 is an effective way to tune the sensitivity and speed of the photovoltaic response of LSMO/SNTO photoelectric devices.
Sarapultseva, E I; Igolkina, J V; Litovchenko, A V
2009-04-01
Electromagnetic radiation at the mobile communication frequency (1 GHz), at the maximum energy flux density permitted in Russia (10 µW/cm²), causes serious functional disorders in the studied unicellular hydrobiont, the infusorian Spirostomum ambiguum: a reduction of its spontaneous motor activity. The form of the biological reaction is unusual: the effect is threshold-like, overall, and does not depend on the duration of microwave exposure.
Botterón, Tania Vanesa
2005-01-01
Full Text Available Previous studies indicate that Puerto Madryn shows the characteristics of a population in nutritional transition, with low rates of undernutrition and an increasing incidence of childhood overweight. Accordingly, the present work assesses the prevalence of overweight and obesity in children and adolescents from neighbourhoods with a high proportion of families with unsatisfied basic needs (NBI), evaluated against three references: NCHS, SAP-IOTF and Frisancho (1991). Weight, total height, and subscapular and tricipital skinfolds were measured, and BMI was calculated, in 656 children of both sexes (6 to 16 years). The data were analyzed according to the indicated references in order to compare their assessments. The results obtained to date show, for both sexes, an average of 60 to 75 % normal BMI values, 25 % overweight, and 3 to 7 % obesity. This agrees with the findings of other authors, in that overweight and obesity are independent of the socioeconomic condition of the individuals; keeping such data up to date and transferring them to the relevant agencies will help reduce the risk of obesity and of chronic non-communicable diseases (ECNT).
Effect of magnetic configuration on frequency of NBI-driven Alfvén modes in TJ-II
Melnikov, A. V.; Ochando, M.; Ascasibar, E.; Castejon, F.; Cappa, A.; Eliseev, L. G.; Hidalgo, C.; Krupnik, L. I.; Lopez-Fraguas, A.; Liniers, M.; Lysenko, S. E.; de Pablos, J. L.; Perfilov, S. V.; Sharapov, S. E.; Spong, D. A.; Jimenez, J. A.; Ufimtsev, M. V.; Breizman, B. N.; HIBP Group; the TJ-II Team
2014-12-01
Excitation of modes in the Alfvénic frequency range (30 kHz and above) was observed for rotational transform values around ι/2π ≈ 1.51. Taking advantage of the unique TJ-II capabilities, a dynamic magnetic configuration experiment with ι(ρ, t) varied during discharges has shown strong effects on the mode frequency via both vacuum ι changes and induced net plasma current. A drastic frequency increase from ~50 to ~250 kHz was observed for some modes when a plasma current as low as ±2 kA was induced by small (10%) changes in the vertical field. A comprehensive set of diagnostics including a heavy ion beam probe, magnetic probes and a multi-chord bolometer made it possible to identify the spatial spread of the modes and deduce the internal amplitudes of their plasma density and magnetic field perturbations. A simple analytical model for f_AE, based on the local Alfvén eigenmode (AE) dispersion relation, was proposed to characterize the observations. It was shown that all the observations, including vacuum iota and plasma current variations, may be fitted by the model, so the linear dependence of the mode frequency on ι (plasma current) and the inverse square-root dependence on density represent the major features of the NBI-induced AEs in TJ-II, and provide a framework for further experiment-to-theory comparison.
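The "simple analytical model for f_AE" is not given explicitly in the abstract; a common local estimate exhibiting the linear ι dependence and the inverse square-root density dependence described above looks like the sketch below (the mode numbers, density and field values are hypothetical, and this is not the TJ-II analysis code):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T·m/A)

def alfven_speed(B, n_e, m_ion=1.6726e-27):
    """Alfven speed v_A = B / sqrt(mu0 * rho) for a hydrogen plasma,
    with mass density rho = n_e * m_proton."""
    return B / math.sqrt(MU0 * n_e * m_ion)

def f_ae(B, n_e, iota, n, m, R0=1.5):
    """Local shear-Alfven frequency estimate,
    f = v_A * |n - m * iota| / (2 * pi * R0):
    linear in iota, and proportional to 1/sqrt(n_e)."""
    return alfven_speed(B, n_e) * abs(n - m * iota) / (2 * math.pi * R0)

# Hypothetical toroidal/poloidal mode numbers near resonance on TJ-II-like
# parameters (B = 0.95 T, R0 = 1.5 m).
f1 = f_ae(B=0.95, n_e=0.5e19, iota=1.55, n=3, m=2)
f2 = f_ae(B=0.95, n_e=2.0e19, iota=1.55, n=3, m=2)
print(f1 / 1e3, "kHz")
print(f1 / f2)  # quadrupling the density halves the frequency
```

The |n − m·ι| factor is why small induced plasma currents, which shift ι, can move the mode frequency by large amounts when the mode sits close to resonance.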
Da Costa, M J; Colson, G; Frost, T J; Halley, J; Pesti, G M
2017-09-01
The objective of this experiment was to determine the digestible lysine (dLys) levels that maximize net returns (MNRL) while maintaining the ideal amino acid ratio in starter diets for broilers raised sex-separate or comingled (straight-run). A total of 3,240 Ross 708 chicks were separated by sex and placed in 90 pens under 2 rearing types: sex-separate (36 males or 36 females) or straight-run (18 males + 18 females). Each rearing type was fed 6 starter diets (25 d) formulated to have dLys levels between 1.05 and 1.80%. A common grower diet with 1.02% dLys was fed from 25 to 32 days. Body weight gain (BWG) and feed intake were assessed at 25 and 32 d for performance evaluation. Additionally, at 26 and 33 d, 4 birds per pen were sampled for carcass yield evaluation. Data were modeled using response surface methodology in order to estimate feed intake and whole carcass weight at 1,600 g live BW. Returns over feed cost were estimated for a 1.8-million-broiler complex of each rearing system under 9 feed/meat price scenarios. Results indicated that females needed more feed to reach market weight, followed by straight-run birds, and then males. At medium meat and feed prices, female birds had an MNRL of 1.07% dLys, whereas straight-run birds and males had an MNRL of 1.05%. As feed and meat prices increased, the MNRL for females increased up to 1.15% dLys. Sex separation resulted in increased revenue under certain feed and meat prices, before the cost of sex separation was deducted. When the sexing cost was subtracted from the returns, sex separation was not shown to be economically viable when targeting birds for light market BW. © 2017 Poultry Science Association Inc.
Assembly and gap management strategy for the ITER NBI vessel passive magnetic shield
Ríos, Luis, E-mail: luis.rios@ciemat.es [CIEMAT Laboratorio Nacional de Fusión, Avda. Complutense 22, 28040 Madrid (Spain); Ahedo, Begoña; Alonso, Javier; Barrera, Germán; Cabrera, Santiago; Rincón, Esther; Ramos, Francisco [CIEMAT Laboratorio Nacional de Fusión, Avda. Complutense 22, 28040 Madrid (Spain); El-Ouazzani, Anass; Graceffa, Joseph; Urbani, Marc; Shah, Darshan [ITER Organization, Route de Vinon-sur-Verdon – CS 90 046, 13067 St Paul Lez Durance Cedex (France); Agarici, Gilbert [Fusion for Energy, Josep Pla 2, Torres Diagonal Litoral B3 – 07/08, 08019 Barcelona (Spain)
2015-10-15
The neutral beam system for ITER consists of two heating and current drive neutral ion beam injectors (HNB) and a diagnostic neutral beam (DNB) injector. The proposed physical plant layout allows a possible third HNB injector to be installed later. The HNB Passive Magnetic Shield (PMS) works in conjunction with the active compensation/correction coils to limit the magnetic field inside the Beam Line Vessel (BLV), Beam Source Vessel (BSV), High Voltage Bushing (HVB) and Transmission Line (TL) elbow to acceptable levels that do not interfere with the operation of the HNB components. This paper describes the current design of the PMS, having had only minor modifications since the preliminary design review (PDR) held in IO in April 2013, and the assembly strategy for the vessel PMS.
The n-by- T Target Discharge Strategy for Inpatient Units.
Parikh, Pratik J; Ballester, Nicholas; Ramsey, Kylie; Kong, Nan; Pook, Nancy
2017-07-01
Ineffective inpatient discharge planning often causes discharge delays and upstream boarding. While an optimal discharge strategy that works across all units at a hospital is likely difficult to identify and implement, a strategy that provides a reasonable target to the discharge team appears feasible. We used observational and retrospective data from an inpatient trauma unit at a Level 2 trauma center in the Midwest US. Our proposed novel n-by-T strategy (discharge n patients by the T-th hour) was evaluated using a validated simulation model. Outcomes included two types of measures: time-based (mean discharge completion and upstream boarding times) and capacity-based (increase in annual inpatient and upstream bed hours). Data from the pilot implementation of a 2-by-12 strategy at the unit were obtained and analyzed. The model suggested that the 1-by-T and 2-by-T strategies could advance the mean completion times by over 1.38 and 2.72 h, respectively (for 10 AM ≤ T ≤ noon, occupancy rate = 85%); the corresponding mean boarding time reductions were nearly 11% and 15%. These strategies could increase the availability of annual inpatient and upstream bed hours by at least 2,469 and 500, respectively. At a 100% occupancy rate, the hospital-favored 2-by-12 strategy reduced the mean boarding time by 26.1%. A pilot implementation of the 2-by-12 strategy at the unit corroborated the model findings, with a 1.98-h advancement in completion times. Target-based discharge strategies, such as the n-by-T, can help substantially reduce discharge lateness and upstream boarding, especially during high unit occupancy. To sustain implementation, commitment from the unit staff and physicians is vital, and may require some training.
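A toy version of an n-by-T evaluation can be sketched in a few lines; the discharge-time distribution and parameters below are invented and far simpler than the validated simulation model used in the study:

```python
import random
import statistics

random.seed(7)

def simulate_day(n_target=2, t_target=12, n_discharges=6):
    """Hypothetical toy model: baseline discharge completion times (hours on
    a 24 h clock) are drawn around mid-afternoon; under the n-by-T strategy
    the n earliest discharges are pulled forward to finish by hour T.
    Returns (baseline mean, strategy mean) for one simulated day."""
    times = sorted(random.gauss(15.0, 2.0) for _ in range(n_discharges))
    shifted = [min(t, t_target) for t in times[:n_target]] + times[n_target:]
    return statistics.mean(times), statistics.mean(shifted)

# Average the advancement of mean completion time over many simulated days.
base, strat = zip(*(simulate_day() for _ in range(10_000)))
print(statistics.mean(base) - statistics.mean(strat))  # mean advancement (h)
```

Even this crude model reproduces the qualitative finding: pulling only the first n discharges forward to hour T advances the day's mean completion time, which in a fuller model frees beds earlier and reduces upstream boarding.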
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
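MAF analysis reduces to a generalized eigenproblem between the covariance of the data and the covariance of spatial differences: factors with the smallest difference-variance relative to total variance are the most autocorrelated. A minimal sketch, with synthetic data standing in for the stream sediment geochemistry and a simple sequential difference standing in for the spatial increment, might look like:

```python
import numpy as np

def maf(X, D):
    """Maximum autocorrelation factors: solve sigma_delta w = lambda sigma w,
    where sigma is the covariance of the data X and sigma_delta the
    covariance of the differences D. Small eigenvalues correspond to the
    most autocorrelated factors."""
    X = X - X.mean(axis=0)
    sigma = np.cov(X, rowvar=False)
    sigma_d = np.cov(D - D.mean(axis=0), rowvar=False)
    # Reduce to an ordinary symmetric eigenproblem by Cholesky whitening.
    L = np.linalg.cholesky(sigma)
    Li = np.linalg.inv(L)
    evals, evecs = np.linalg.eigh(Li @ sigma_d @ Li.T)
    W = Li.T @ evecs           # columns are the MAF loading vectors
    return evals, X @ W        # eigenvalues ascending, and factor scores

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # synthetic 3-variable "geochemical" data
D = X[1:] - X[:-1]             # crude stand-in for a spatial difference
evals, scores = maf(X, D)
print(evals)
```

In the kriging application, the MAF scores (rather than raw variables) are interpolated, since the leading factors concentrate the spatially continuous signal.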
Kang-Woong Kim
2016-07-01
Full Text Available Abstract We determined the optimum dietary protein level for juvenile barred knifejaw Oplegnathus fasciatus reared in cages. Five semi-purified isocaloric diets were formulated from white fish meal and casein to contain 35, 40, 45, 50, and 60 % crude protein (CP). Fish with an initial body weight of 7.1 ± 0.06 g (mean ± SD) were randomly distributed into 15 net cages (each 60 cm × 40 cm × 90 cm, W × L × H) as groups of 20 fish in triplicate. The fish were fed to apparent satiation twice a day. After 8 weeks of feeding, the weight gain (WG) of fish fed the 45, 50, and 60 % CP diets was significantly higher than that of fish fed the 35 and 40 % CP diets. However, there were no significant differences in WG among fish fed the 45, 50, and 60 % CP diets. Generally, feed efficiency (FE) and specific growth rate (SGR) showed a similar trend to WG. However, the protein efficiency ratio (PER) was inversely related to dietary protein level. Energy retention efficiency increased with increasing dietary protein level, owing to protein sparing by non-protein energy sources. Blood hematocrit was not affected by dietary protein level. However, a significantly lower hemoglobin level was found in fish fed 35 % CP than in fish fed the 40, 45, 50, and 60 % CP diets. Fish fed 60 % CP showed a lower survival rate than fish fed the 35, 40, 45, and 50 % CP diets. Broken-line analysis of WG showed that the optimum dietary protein level was 45.2 % in an 18.8 kJ/g diet for juvenile barred knifejaw. This study has potential implications for the successful cage culture of barred knifejaw.
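Broken-line (linear-plateau) analysis of the kind used above to locate the optimum protein level can be sketched as a grid search over candidate breakpoints; the weight-gain numbers below are hypothetical, not the study's data:

```python
def broken_line_sse(x, y, breakpoint):
    """Sum of squared errors for a linear-plateau model:
    y = a + b * min(x, breakpoint), i.e. y rises linearly up to the
    breakpoint and is flat beyond it. a and b are fit by least squares."""
    z = [min(xi, breakpoint) for xi in x]
    n = len(x)
    mz, my = sum(z) / n, sum(y) / n
    b = (sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
         / sum((zi - mz) ** 2 for zi in z))
    a = my - b * mz
    return sum((yi - (a + b * zi)) ** 2 for zi, yi in zip(z, y))

# Hypothetical weight-gain (g) response to dietary CP (%): rises, then plateaus.
cp = [35, 40, 45, 50, 60]
wg = [70, 85, 100, 101, 100]
best = min(range(36, 60), key=lambda bp: broken_line_sse(cp, wg, bp))
print(best)  # breakpoint (%) with the smallest SSE = estimated optimum
```

The estimated breakpoint is the requirement: feeding beyond it adds protein cost without further growth, which is consistent with the inverse PER trend reported above.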
Ali Arkamose Assani
2016-10-01
Full Text Available Various manmade features (diversions, dredging, regulation, etc.) have affected water levels in the Great Lakes and their outlets since the 19th century. The goal of this study is to analyze the impacts of such features on the stationarity and dependence between monthly mean maximum and minimum water levels in the Great Lakes and St. Lawrence River from 1919 to 2012. As far as stationarity is concerned, the Lombard method brought out shifts in mean and variance values of monthly mean water levels in Lake Ontario and the St. Lawrence River related to regulation of these waterbodies in the wake of the digging of the St. Lawrence Seaway in the mid-1950s. Water level shifts in the other lakes are linked to climate variability. As for the dependence between water levels, the copula method revealed a change in dependence mainly between Lakes Erie and Ontario following regulation of monthly mean maximum and minimum water levels in the latter. The impacts of manmade features primarily affected the temporal variability of monthly mean water levels in Lake Ontario.
Si, Yanmei; Sun, Zongzhao; Zhang, Ning; Qi, Wei; Li, Shuying; Chen, Lijun; Wang, Hua
2014-10-21
An ultrasensitive sandwich-type analysis method has been developed for probing low-level free microRNAs (miRNAs) in blood by a maximal signal amplification protocol of catalytic silver deposition. Gold nanoclusters (AuNCs) were first synthesized and incorporated in situ into alkaline phosphatase (ALP) to form ALP-AuNCs. Unexpectedly, the incorporated AuNCs dramatically enhanced the catalytic activity of ALP-AuNCs relative to native ALP. A sandwiched hybridization protocol was then proposed using ALP-AuNCs as the catalytic labels of the DNA detection probes for targeting miRNAs that were magnetically captured from blood samples by DNA capture probes, followed by the catalytic ligation of two DNA probes complementary to the targets. Herein, the ALP-AuNC labels act as dual catalysts, separately mediating the ALP-catalyzed substrate dephosphorylation reaction and the AuNCs-accelerated silver deposition reaction. The signal amplification of ALP-AuNCs-catalyzed silver deposition was thereby maximized and measured by electrochemical outputs. The developed electroanalysis strategy allows the ultrasensitive detection of free miRNAs in blood with a detection limit as low as 21.5 aM, including the accurate identification of single-base mutants of miRNAs. Such a sandwich-type analysis method may circumvent the bottlenecks of current detection techniques in probing short-chain miRNAs. It could be tailored as an ultrasensitive detection candidate for low-level free miRNAs in blood toward the diagnosis of cancer and the warning or monitoring of cancer metastasis in the clinical laboratory.
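A detection limit such as the 21.5 aM figure above is conventionally estimated from a calibration curve as 3 × sd(blank) / slope. The calibration numbers below are entirely hypothetical, for illustration of the calculation only.

```python
import numpy as np

# Hypothetical calibration data for an electrochemical miRNA assay.
conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0])   # concentration (aM)
signal = np.array([0.40, 0.90, 1.42, 2.38, 4.41])    # current (uA)

slope, intercept = np.polyfit(conc, signal, 1)       # linear calibration
sd_blank = 0.03                                      # uA, assumed blank noise

lod = 3 * sd_blank / slope                           # 3-sigma detection limit
print(f"estimated LOD: {lod:.1f} aM")
```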
Drinking Water Maximum Contaminant Levels (MCLs)
U.S. Environmental Protection Agency — National Primary Drinking Water Regulations (NPDWRs or primary standards) are legally enforceable standards that apply to public water systems. Primary standards...
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and their application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis, contrary to ordinary non-spatial factor analysis, gives an objective discrimina...
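The MAF construction described above reduces to a generalized eigenproblem between the covariance of the data and the covariance of its spatial increments: the factor with the smallest increment-to-total variance ratio has the highest spatial autocorrelation. The sketch below runs this on a synthetic one-dimensional multichannel "transect", not the Greenland data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-channel transect: a smooth spatially coherent signal mixed
# with white noise (a stand-in for multichannel geochemical samples).
n = 500
t = np.linspace(0.0, 10.0, n)
smooth = np.sin(t)
X = np.column_stack([
    smooth + 0.1 * rng.standard_normal(n),
    0.5 * smooth + 0.5 * rng.standard_normal(n),
    rng.standard_normal(n),                      # pure-noise channel
])
X -= X.mean(axis=0)

Sigma = np.cov(X, rowvar=False)                      # total covariance
Sigma_d = np.cov(np.diff(X, axis=0), rowvar=False)   # increment covariance

# Generalized eigenproblem Sigma_d w = lambda * Sigma w, via whitening.
L = np.linalg.cholesky(Sigma)
B = np.linalg.solve(L, np.linalg.solve(L, Sigma_d).T).T
lam, V = np.linalg.eigh(B)                           # ascending eigenvalues
W = np.linalg.solve(L.T, V)
maf1 = X @ W[:, 0]                                   # most autocorrelated factor

r = np.corrcoef(maf1[:-1], maf1[1:])[0, 1]
print(f"lag-1 autocorrelation of MAF1: {r:.3f}")
```

On this toy input MAF1 recovers the smooth signal, while the last factor absorbs the white-noise channel.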
Magnetic properties of Aurivillius lanthanide-bismuth (LnFeO3)nBi4Ti3O12 (n = 1, 2) layered titanates
Tartaj, J.
2008-06-01
Full Text Available Bismuth titanates with the Aurivillius layer structure, (BiFeO3)nBi4Ti3O12, are of great technological interest because of their applications as non-volatile ferroelectric memories and high-temperature piezoelectric materials. The synthesis and crystallographic characterization of a new family of compounds, (LnFeO3)nBi4Ti3O12, was recently reported, in which the layers consist of LnFeO3 perovskites with a lanthanide Ln3+ substituting diamagnetic Bi3+. We report herein the magnetic properties of bulk samples with Ln = Nd, Eu, Gd and Tb, and n = 1 and 2. Single-layer materials are paramagnetic, similar to non-substituted bismuth titanate Bi5FeTi3O15, and show crystal field effects due to the crystallographic environment of Eu3+ and Tb3+. Several anomalies are detected in the magnetization M(T) of the double-layer (LnFeO3)2Bi4Ti3O12 compounds, related to the strong magnetism of Tb and Gd: they appear only weakly for Nd and are absent for the Van Vleck ion Eu3+ and in the parent Bi6Fe2Ti3O18 compound.
Pedro Henrique de Cerqueira Luz
2000-08-01
Full Text Available In a degraded pasture of Tobiatã grass (Panicum maximum Jacq cv. Tobiatã) in Pirassununga - SP, an experiment was carried out to observe the effects of levels and types of limestone, with or without incorporation, on tillering, ground cover and pasture productivity over six cuts from 1996 to 1997. Dry matter yield did not respond to the levels and types of limestone; however, incorporation of the limestone with a harrow was effective, and across the cuts dry matter production increased in the summer and decreased in the winter. The forage grass accounted for 72.8% of the ground cover, with a trend toward less bare ground in the treatments with calcined limestone.
European Food Safety Authority
2012-07-01
Full Text Available According to Article 12 of Regulation (EC) No 396/2005, the European Food Safety Authority (EFSA) has reviewed the Maximum Residue Levels (MRLs) currently established at European level for the pesticide active substance petroleum oils (CAS 92062-35-6). Considering that this active substance is no longer authorised within the European Union, that no MRLs are established by the Codex Alimentarius Commission, and that no import tolerances were notified to EFSA, residues of petroleum oils (CAS 92062-35-6) are not expected to occur in any plant or animal commodity. Available data were also not sufficient to derive a residue definition or an LOQ for enforcement against potential illegal uses.
丁明; 刘盛
2013-01-01
The random fluctuation and intermittency of photovoltaic (PV) generation have a marked effect on the power grid, and the larger the PV capacity, the more pronounced the effect. A genetic algorithm (GA) based approach is proposed for calculating the maximum penetration level of multiple PV generators simultaneously connected to a distribution network. The approach accounts for abrupt changes in PV output and for the participation of on-load tap changers (OLTC) and shunt capacitors in voltage regulation. To verify its effectiveness, the IEEE 33-bus test system is taken as an example; the results show that the grid-connection positions of the PV generators, the load level of the test system and the power factors of the PV generators all markedly affect the maximum penetration level of multiple PV generators.
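The GA idea in the abstract can be sketched as maximizing total PV injection under a voltage-rise limit, with infeasible candidates penalized in the fitness function. The 5-bus feeder, linearized sensitivity matrix and GA settings below are hypothetical stand-ins, not the paper's IEEE 33-bus model with OLTC and capacitor actions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_bus, v_limit = 5, 0.05                       # p.u. voltage-rise limit
# Hypothetical linearized sensitivity: downstream buses see more rise.
S = 0.01 * np.tril(np.ones((n_bus, n_bus)))

def fitness(p):
    """Total PV power minus a heavy penalty for voltage-limit violations."""
    rise = S @ p
    penalty = 1e3 * np.maximum(rise - v_limit, 0.0).sum()
    return p.sum() - penalty

pop = rng.uniform(0, 2, size=(40, n_bus))      # initial PV injections (MW)
for _ in range(200):
    fit = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(fit)][-20:]       # truncation selection
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(0, 20, 2)]
        mask = rng.random(n_bus) < 0.5         # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(0.0, 0.1, n_bus)   # Gaussian mutation
        kids.append(np.clip(child, 0.0, 10.0))
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(p) for p in pop])]
print(f"approx. maximum total PV: {best.sum():.2f} MW")
```

For this toy sensitivity matrix the binding constraint is the last bus, so the true maximum total injection is 5 MW; the GA should approach it from the feasible side.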
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
孟娣; 宋怿; 房金岑
2011-01-01
This study reviews the current status of maximum use levels for food additives in aquatic products, analyzes the differences among the food additive standards of the Codex Alimentarius Commission (CAC), the United States, Japan, the European Union and China, and proposes measures and recommendations for improving China's national standards.
Pozzo, L; Cavallarin, L; Antoniazzi, S; Guerre, P; Biasibetti, E; Capucchio, M T; Schiavone, A
2013-05-01
The European Commission Recommendation 2006/576/EC indicates that the maximum tolerable level of ochratoxin A (OTA) in poultry feeds is 0.1 mg OTA/kg. Thirty-six 1-day-old male broiler chicks were divided into two groups, a control (basal diet) and an OTA (basal diet + 0.1 mg OTA/kg) group. The OTA concentration was quantified in serum, liver, kidney, breast and thigh samples. The thiobarbituric acid reactive substances (TBARS) content was evaluated in the liver, kidney, breast and thigh samples. The glutathione (GSH) content, and catalase (CAT) and superoxide dismutase (SOD) activity were measured in the liver and kidney samples. Histopathological traits were evaluated for the spleen, bursa of Fabricius and liver samples. Moreover, the chemical composition of the meat was analysed in breast and thigh samples. In the OTA diet-fed animals, a serum OTA concentration of 1.15 ± 0.35 ng/ml was found, and OTA was also detected in kidney and liver at 3.58 ± 0.85 ng OTA/g f.w. and 1.92 ± 0.21 ng OTA/g f.w., respectively. The TBARS content was higher in the kidney of the ochratoxin A group (1.53 ± 0.18 nmol/mg protein vs. 0.91 ± 0.25 nmol/mg protein). Feeding OTA at 0.1 mg OTA/kg also resulted in degenerative lesions in the spleen, bursa of Fabricius and liver. The maximum tolerable level of 0.1 mg OTA/kg, established for poultry feeds by the EU, represents a safe limit for the final consumer, because no OTA residues were found in breast and thigh meat. Even though no clinical signs were noticed in the birds fed the OTA-contaminated diet, moderate histological lesions were observed in the liver, spleen and bursa of Fabricius.
Leijala, Ulpu; Björkqvist, Jan-Victor; Johansson, Milla M.; Pellikka, Havu
2017-04-01
Future coastal management continuously strives for more location-exact and precise methods to investigate possible extreme sea level events and to face flooding hazards in the most appropriate way. Evaluating future flooding risks by understanding the behaviour of the joint effect of sea level variations and wind waves is one of the means to make a more comprehensive flooding hazard analysis, and may at first seem like a straightforward task to solve. Nevertheless, challenges and limitations such as the availability of time series of the sea level and wave height components, the quality of data, significant locational variability of coastal wave height, as well as assumptions to be made depending on the study location, make the task more complicated. In this study, we present a statistical method for combining location-specific probability distributions of water level variations (including local sea level observations and global mean sea level rise) and wave run-up (based on wave buoy measurements). The goal of our method is to account for the waves more accurately when making a flooding hazard analysis on the coast, compared with the approach of adding a separate fixed wave action height on top of sea level-based flood risk estimates. As a result of our new method, we obtain maximum elevation heights, with different return periods, of the continuous water mass caused by the combination of both phenomena, "the green water". We also introduce a sensitivity analysis to evaluate the properties and functioning of our method. The sensitivity test is based on using theoretical wave distributions representing different alternatives of wave behaviour in relation to sea level variations. As these wave distributions are merged with the sea level distribution, we get information on how the different wave height conditions and the shape of the wave height distribution influence the joint results. Our method presented here can be used as an advanced tool to minimize over- and
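The combination step described above, merging a sea-level distribution with a wave run-up distribution, can be sketched as a numerical convolution of the two PDFs, under an added independence assumption and with entirely hypothetical parameters:

```python
import numpy as np

dx = 0.001                                    # grid step (metres)
x = np.arange(-1.0, 4.0, dx)

# Hypothetical sea-level PDF: Gaussian, mean 0.2 m, sd 0.25 m.
sl_pdf = np.exp(-0.5 * ((x - 0.2) / 0.25) ** 2) / (0.25 * np.sqrt(2 * np.pi))

# Hypothetical wave run-up PDF: exponential tail above zero, scale 0.3 m.
ru_pdf = np.where(x >= 0, np.exp(-x / 0.3) / 0.3, 0.0)

# Distribution of the total "green water" elevation = convolution of PDFs
# (valid if the two components are independent).
total_pdf = np.convolve(sl_pdf, ru_pdf) * dx
xt = np.arange(len(total_pdf)) * dx + 2 * x[0]   # support of the sum
cdf = np.cumsum(total_pdf) * dx

# Elevation exceeded with 1 % probability (a stand-in for a return level).
level_99 = xt[np.searchsorted(cdf, 0.99)]
print(f"1 % exceedance elevation: {level_99:.2f} m")
```

The paper's method is more refined (location-specific distributions, dependence between components); this sketch only illustrates why a distributional combination gives different, usually lower, design levels than adding a fixed wave height to a sea-level return level.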
Luiz Augusto Fonseca Magalhães
2007-09-01
Full Text Available
Phosphorus deficiency in Brazilian soils, together with the natural acidity of cerrado soils, contributes to the low productivity of the national livestock industry. The objective of this work was to evaluate the effect of liming and of different phosphorus sources and rates on dry matter production of Tanzania grass (Panicum maximum Jacq. cv. Tanzânia). The experiment was divided into two groups, G1 and G2. In G1, three liming levels (no correction, and correction to raise base saturation to 30 and 60%, with limestone rates of 1.12 and 2.64 t/ha, respectively) and three phosphorus sources (single superphosphate, Yoorin thermophosphate and Arad rock phosphate) were evaluated. In G2, the same three liming levels and five phosphorus rates (0, 30, 60, 120 and 240 kg/ha of P) were evaluated. In G1 there was no interaction between phosphorus sources and liming, nor any effect of liming. There were, however, significant differences among phosphorus sources, with single superphosphate outperforming Yoorin thermophosphate and Arad rock phosphate. Dry matter production in G2 was not influenced by liming level, but differed significantly among phosphorus rates, with maximum dry matter production obtained at 172.8 kg/ha of P (395.6 kg/ha of P2O5). There was no significant interaction between liming and phosphorus rate. These results reaffirm the importance of phosphate fertilization for production in cerrado soils.
KEYWORDS: phosphorus; liming; forage; Tanzania grass.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We first derive minimum residual error rates when the stored data come from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
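The Mean Energy Model mentioned above has a closed-form answer up to one scalar: over a finite set of states with energies E_i, the maximum-entropy distribution matching a mean-energy constraint is the Gibbs family p_i ∝ exp(-β E_i), with β fixed by the constraint. A sketch with hypothetical energies:

```python
import numpy as np

E = np.array([0.0, 1.0, 2.0, 3.0])    # hypothetical state energies
target_mean = 1.2                     # hypothetical observed mean energy

def mean_energy(beta):
    """Mean energy under the Gibbs distribution with multiplier beta."""
    w = np.exp(-beta * E)
    p = w / w.sum()
    return float(p @ E)

# mean_energy is strictly decreasing in beta, so solve by bisection.
lo, hi = -20.0, 20.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if mean_energy(mid) > target_mean:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)

p = np.exp(-beta * E); p /= p.sum()
entropy = float(-(p * np.log(p)).sum())
print(f"beta = {beta:.3f}, entropy = {entropy:.3f} nats")
```

The uniform distribution (β = 0) would give mean energy 1.5 here; the constraint 1.2 forces β > 0 and an entropy strictly below log 4, illustrating the "entropy loss" relative to the unconstrained maximum.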
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels in the training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted and true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameters to control the complexity of the predictor. The learning problem is formulated as an objective function that considers parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
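The MCC objective described above is commonly optimized by half-quadratic (iteratively reweighted least squares) steps, where each sample's influence is a Gaussian kernel of its residual, so outliers are automatically down-weighted. The sketch below applies that general idea to a toy linear regression with label outliers; the data, kernel width and penalty are hypothetical, and this is not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data with 10 grossly corrupted labels.
n = 200
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
y = X @ np.array([1.0, 2.0]) + 0.05 * rng.standard_normal(n)
y[:10] += 20.0                                  # label outliers

sigma, lam = 0.5, 1e-3                          # kernel width, L2 penalty
w, *_ = np.linalg.lstsq(X, y, rcond=None)       # LS init (outlier-biased)
for _ in range(50):
    r = y - X @ w
    k = np.exp(-r**2 / (2 * sigma**2))          # correntropy sample weights
    A = X.T @ (k[:, None] * X) + lam * np.eye(2)
    w = np.linalg.solve(A, X.T @ (k * y))       # weighted ridge step

print(f"MCC fit (true [1, 2]): {np.round(w, 2)}")
```

The plain least-squares initializer is pulled off the true line by the outliers; the reweighted iterations assign them near-zero weight and converge back to the true coefficients.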
Américo Fróes Garcez Neto
2002-09-01
Full Text Available The morphogenetic and structural characteristics of the grass Panicum maximum cv. Mombaça were studied under different levels of nitrogen supply and cutting heights. The study was conducted in a greenhouse, evaluating four nitrogen doses (0, 50, 100 and 200 mg/dm³) and three cutting heights (5, 10 and 20 cm). The morphogenetic evaluations covered leaf appearance and elongation rates, phyllochron and leaf lifespan; the structural evaluations covered the number of leaves, the number of tillers and the final leaf blade length. The grass showed a very marked morphogenetic response to nitrogen supply during regrowth, underlining the important role of nitrogen as a tool for manipulating plant structure. All variables in the study responded positively to nitrogen supply, except the phyllochron, which was reduced by the nutritional effect. Differences in cutting height were significant for leaf lifespan, as well as for leaf length and the number of green leaves per tiller. Leaf elongation and appearance rates increased by up to 133 and 104%, respectively, with greater nitrogen availability, and the relationship between these two variables was decisive in characterizing the main vegetative changes observed. The strong morphogenetic response of this cultivar constitutes an efficient means of manipulating canopy structure, allowing better allocation of productive resources during plant growth and development.
Prueksapanich, Piyapan; Pittayanon, Rapat; Rerknimitr, Rungsun; Wisedopas, Naruemon; Kullavanijaya, Pinit
2015-01-01
Background and study aims: Lugol's chromoendoscopy provides excellent sensitivity for the detection of early esophageal squamous cell neoplasms (ESCN), but its specificity is suboptimal. An endoscopy technique for real-time histology is required to decrease the number of unnecessary biopsies. This study aimed to compare the ESCN diagnostic capability of probe-based confocal laser endomicroscopy (pCLE) and dual focus narrow-band imaging (dNBI) in Lugol's voiding lesions. Patients and methods:...
Jiang Zhang
2012-01-01
Full Text Available The heterostructured TiO2/N-Bi2WO6 composites were prepared by a facile sol-gel-hydrothermal method. The phase structures, morphologies, and optical properties of the samples were characterized by using X-ray powder diffraction (XRD), scanning electron microscopy (SEM), high-resolution transmission electron microscopy (HRTEM), energy dispersive spectroscopy (EDS), and UV-vis diffuse reflectance spectroscopy. The photocatalytic activities for rhodamine B of the as-prepared products were measured under visible and ultraviolet light irradiation at room temperature. The TiO2/N-Bi2WO6 composites exhibited much higher photocatalytic performances than TiO2 as well as Bi2WO6. The enhancement in the visible light photocatalytic performance of the TiO2/N-Bi2WO6 composites could be attributed to the effective electron-hole separations at the interfaces of the two semiconductors, which facilitate the transfer of the photoinduced carriers.
Pukazhselvan, D; Perez, José; Nasani, Narendar; Bdikin, Igor; Kovalevsky, Andrei V; Fagg, Duncan Paul
2016-01-04
The present study aims to understand the catalysis of the MgH2-Nb2O5 hydrogen storage system. To clarify the chemical interaction between MgH2 and Nb2O5, the mechanochemical reaction products of a composite mixture of MgH2 + 0.167Nb2O5 were monitored at different milling times (2, 5, 15, 30, and 45 min, as well as 1, 2, 5, 10, 15, 20, 25, and 30 h). The study confirms the formation of catalytically active Nb-doped MgO nanoparticles (typically MgxNbyOx+y, with a crystallite size of 4-8 nm) formed by transforming the reactants through an intermediate phase typified by Mgm-xNb2n-yO5n-(x+y). The initially formed MgxNbyOx+y product is Nb-rich, with the concentration of Mg increasing with milling time. The nanoscale end-product MgxNbyOx+y closely resembles the crystallographic features of MgO, but with at least a 1-4 % larger unit cell volume. Unlike MgO, which is known to passivate surfaces in the MgH2 system, the Nb-dissolved MgO effectively mediates the Mg-H2 sorption reaction. We believe that this observation will lead to new developments in the area of catalysis for metal-gas interactions.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate the intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
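For reference, the near-maximum-likelihood component approximates full maximum-likelihood sequence detection over the ISI channel. On a short block this exact ML search is feasible and can be sketched as follows; the 2-tap channel, block length and noise level are hypothetical:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

h = np.array([1.0, 0.5])                 # ISI channel impulse response
bits = rng.choice([-1.0, 1.0], size=8)   # transmitted BPSK block
y = np.convolve(bits, h)[:8] + 0.1 * rng.standard_normal(8)

# Exhaustive ML search: the Euclidean metric is the ML criterion for AWGN.
best, best_err = None, np.inf
for cand in product([-1.0, 1.0], repeat=8):
    r = y - np.convolve(cand, h)[:8]
    err = r @ r
    if err < best_err:
        best, best_err = np.array(cand), err

print("detected == transmitted:", np.array_equal(best, bits))
```

The exhaustive search costs 2^N metric evaluations; practical near-ML detectors (and the Viterbi algorithm) exist precisely to avoid this exponential cost while approaching the same error performance.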
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Nakada, Masao; Okuno, Jun'ichi; Yokoyama, Yusuke
2016-02-01
Inference of globally averaged eustatic sea level (ESL) rise since the Last Glacial Maximum (LGM) depends strongly on the interpretation of relative sea level (RSL) observations at Barbados and Bonaparte Gulf, Australia, which are sensitive to the viscosity structure of Earth's mantle. Here we examine the LGM RSL changes at Barbados and Bonaparte Gulf (RSL_L^Bar and RSL_L^Bon), the differential RSL between the two sites (ΔRSL_L^Bar,Bon) and the rate of change of the degree-two harmonic of Earth's geopotential due to the glacial isostatic adjustment (GIA) process (GIA-induced J̇2) to infer the ESL component and the viscosity structure of Earth's mantle. The differential RSL, ΔRSL_L^Bar,Bon, and the GIA-induced J̇2 are dominantly sensitive to the lower-mantle viscosity, and nearly insensitive to the upper-mantle rheological structure and to GIA ice models with an ESL component of about 120-130 m. The comparison between the predicted and observationally derived ΔRSL_L^Bar,Bon indicates a lower-mantle viscosity higher than ~2 × 10^22 Pa s, and the observationally derived GIA-induced J̇2 of -(6.0-6.5) × 10^-11 yr^-1 indicates two permissible solutions for the lower mantle, ~10^22 and (5-10) × 10^22 Pa s. That is, the effective lower-mantle viscosity inferred from these two observational constraints is (5-10) × 10^22 Pa s. The LGM RSL changes at both sites, RSL_L^Bar and RSL_L^Bon, are also sensitive to the ESL component and the upper-mantle viscosity as well as the lower-mantle viscosity. The permissible upper-mantle viscosity increases with decreasing ESL component, owing to the sensitivity of the LGM sea level at Bonaparte Gulf (RSL_L^Bon) to the upper-mantle viscosity, and the inferred upper-mantle viscosity for adopted lithospheric thicknesses of 65 and 100 km is (1-3) × 10^20 Pa s for ESL ~130 m and (4-10) × 10^20 Pa s for ESL ~125 m. The former solution of (1-3) × 10^20
邵懿; 王君; 吴永宁
2014-01-01
Objective: To explore the extent of alignment of China's maximum levels for lead in food with international standards, and to provide evidence and reference for improving China's Maximum Levels (MLs) of Contaminants in Foods. Methods: Food categories and concentration limits for lead in China were compared with those of the Codex Alimentarius Commission (CAC), the European Union, and Australia and New Zealand. Results: Considering the international risk assessment results for lead, China has set MLs for almost all foods that contribute to dietary lead exposure, so the food categories covered by the Chinese standard are broader than those of the CAC, European Union, and Australia/New Zealand standards. However, some MLs for lead in China are still looser than those of the CAC or other countries. Conclusion: Measures to control the major contributing sources of lead in food should be taken, and a comprehensive national survey of lead contamination in food conducted, to lay the foundation for further improvement of China's food contaminant standards.
Barberá Durbán, Rafael
2014-01-01
INTRODUCTION: Most malignant head and neck neoplasms are squamous cell carcinomas arising in the upper aerodigestive tract. Given their preferential location on the mucosal surface, in-office endoscopic examination is essential. To improve the endoscopic visual examination of lesions and help distinguish them from healthy mucosa, the narrow-band imaging (NBI) technique was developed. Initially used...
Polyp Segmentation in NBI Colonoscopy
Gross, Sebastian; Kennel, Manuel; Stehle, Thomas; Wulff, Jonas; Tischendorf, Jens; Trautwein, Christian; Aach, Til
Endoscopic screening of the colon (colonoscopy) is performed to prevent cancer and to support therapy. During the intervention, colon polyps are located, inspected and, if need be, removed by the investigator. We propose a segmentation algorithm as part of an automatic polyp classification system for colonoscopic narrow-band images. Our approach includes multi-scale filtering for noise reduction, suppression of small blood vessels, and enhancement of major edges. Results of the subsequent edge detection are compared to a set of elliptic templates and evaluated. We validated our algorithm on our polyp database with images acquired during routine colonoscopic examinations. The presented results show the reliable segmentation performance of our method and its robustness to image variations.
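The smoothing and edge-detection stages described above can be sketched as follows. This is an illustrative toy in pure Python on a synthetic disc image, not the authors' implementation; all function names and parameter values are invented here, and the elliptic-template matching stage is omitted.

```python
# Sketch of the early stages of the described pipeline (noise smoothing,
# then edge detection) on a synthetic grayscale image with a bright disc.

def make_disc(n=41, r=12, inside=200, outside=50):
    """Synthetic image: bright disc of radius r centred in an n x n frame."""
    c = n // 2
    return [[inside if (x - c) ** 2 + (y - c) ** 2 <= r * r else outside
             for x in range(n)] for y in range(n)]

def box_blur(img):
    """3x3 mean filter as a stand-in for multi-scale noise reduction."""
    n = len(img)
    out = [row[:] for row in img]
    for y in range(1, n - 1):
        for x in range(1, n - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) // 9
    return out

def sobel_magnitude(img):
    """Gradient magnitude via Sobel operators; large values mark major edges."""
    n = len(img)
    mag = [[0] * n for _ in range(n)]
    for y in range(1, n - 1):
        for x in range(1, n - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            mag[y][x] = abs(gx) + abs(gy)
    return mag

img = box_blur(make_disc())
edges = sobel_magnitude(img)
# The strongest edge response should sit on the disc boundary (radius ~12).
c = len(edges) // 2
peak = max((v, x, y) for y, row in enumerate(edges) for x, v in enumerate(row))
dist2 = (peak[1] - c) ** 2 + (peak[2] - c) ** 2
```

In the real system the detected edge map would then be scored against elliptic templates; here we only check that the edge response concentrates on the object boundary.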
王鹏飞; 陈兆峰; 王鹏斌; 魏丽娜; 王芳; 贠建尉; 刘子燕; 黄晓俊
2016-01-01
Objective: To compare the value of NBI with magnifying endoscopy (NBI-ME) and Lugol chromoendoscopy (LCE) in the preoperative assessment of early esophageal cancer (EEC), and to assess whether the former can replace the latter. Methods: 59 patients, sampled in the Second Hospital of Lanzhou University, the First Hospital of Lanzhou University and the Second Hospital of Lanzhou City from January 2014 to December 2015, were examined by both NBI-ME and LCE to delineate lesion boundaries and to predict pathological types, and the findings were statistically analyzed against the final postoperative pathological results. Results: Only 64.4% (38/59) of lesion boundaries could be clearly delineated by NBI-ME, significantly lower than with LCE (91.5%, 54/59), with kappa = 0.208 < 0.4, indicating poor agreement. In predicting pathological type, NBI-ME showed moderate agreement with postoperative pathology (kappa = 0.429 > 0.4), whereas LCE showed poor agreement (kappa = 0.286 < 0.4, P = 0.001 < 0.01). Conclusion: Although NBI-ME shows some agreement with postoperative pathology in predicting the pathological type of EEC, and outperforms LCE in this respect, LCE retains a clear advantage in delineating lesion boundaries. NBI-ME cannot yet replace LCE.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Co-operation and Development, the OECD has developed an MRL Calculator.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s, it has been used not only as a physical law but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form, as well as guidelines and recent data collected by the author, provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
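The quoted limit F_max = c^4/(4G) is straightforward to evaluate numerically. This quick check uses CODATA constant values and is not part of either paper:

```python
# Numerical value of the conjectured maximum force F_max = c^4 / (4G),
# using CODATA values for the speed of light and the gravitational constant.
c = 2.99792458e8      # speed of light, m/s (exact by definition)
G = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
F_max = c ** 4 / (4 * G)
# F_max comes out to roughly 3e43 N, an enormous force scale.
```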
Guasp, J.; Liniers, M.
1995-07-01
The dependence on density and beam energy of the different kinds of fast ion losses, direct and delayed, during tangential balanced NBI injection in the TJ-II helical axis stellarator has been analysed. Direct losses increase with energy, and a strong difference between the two injection directions appears; they are produced by passing particles that lose confinement within a few microseconds, and the influence of birth profiles produces an increase with density. Delayed losses are well separated in time from direct ones; they are produced by particles experiencing pitch angle scattering, and most of them correspond to trapped particles. They are much less important than the direct ones (about 1/3), decrease slowly with energy and, together with charge exchange (CX) losses, increase with density (an effect of the initial profile). The absorption is rather independent of energy, with low values at low density owing to high shine-through and CX losses, but it recovers quickly as density increases. (Author) 4 refs.
Guasp, J.
1995-07-01
A numerical analysis of the impact patterns on the vacuum vessel produced by CX neutrals during tangential balanced NBI in the TJ-II helical axis stellarator has been performed. The results show periodic distributions with smooth maxima and mild loads, concentrated preferentially on the HC plates. The neutrals show a certain preference for emerging downwards from the plasma, as a consequence of a similar trend for the trapped particles. The differences between the impacts produced by the beam parallel to the magnetic field and those from the opposite beam are small, again a consequence of trapped particles losing memory of their initial direction. The dependence of the loads on plasma density and beam energy follows the trend of CX losses, decreasing strongly with increasing density and decreasing, more smoothly, with energy. (Author) 3 refs.
Preparation of Solid Solutions [Li3xLa0.67-xYyTi1-2yNbyO3] and Its Lithium-ion Conductivity
李荣华; 陈睿婷; 王文继
2002-01-01
Perovskite-type lithium fast ion conductors of the Li3xLa0.67-xYyTi1-2yNbyO3 system were prepared by solid state reaction. X-ray powder diffraction shows that a single-phase perovskite solid solution with orthorhombic structure forms in the range x = 0.10, y < 0.075. Beyond this composition range a second phase, hexagonal Y2O3, is found. AC impedance measurements indicate that the bulk and total conductivities at 25 °C are of the order of 10^-4 S·cm^-1 and 10^-5 S·cm^-1, respectively. The compositions have low bulk activation energies of about 20 kJ·mol^-1 and total activation energies of about 40 kJ·mol^-1 over the temperature range 298-523 K.
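Assuming a simple Arrhenius form σ(T) = A·exp(−Ea/RT) for the conductivity (the usual model for such measurements, though the abstract does not state the functional form used), the quoted bulk activation energy of about 20 kJ/mol implies roughly a thirty-fold conductivity increase between 298 K and 523 K; the unknown prefactor A cancels in the ratio:

```python
import math

# Arrhenius sketch: how a bulk activation energy of ~20 kJ/mol (from the
# abstract) scales the ionic conductivity between 298 K and 523 K.
# sigma(T) = A * exp(-Ea / (R * T)); the prefactor A cancels in the ratio.
R = 8.314          # gas constant, J mol^-1 K^-1
Ea = 20e3          # bulk activation energy, J/mol
ratio = math.exp(-Ea / (R * 523)) / math.exp(-Ea / (R * 298))
# ratio ~ 32: conductivity rises by about 1.5 orders of magnitude.
```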
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true; but for 3-regular graphs, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
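The extreme-value argument can be illustrated with a toy calculation (my own sketch, not the authors' model): if magnitudes follow a truncated Gutenberg-Richter exponential law, the largest of N events has CDF F(m)^N, and magnitudes near the assumed maximum are almost never observed in realistically sized catalogs, which is the root of the testability problem. All parameter values below are invented for illustration.

```python
import math

# Toy extreme-value setup: magnitudes follow a Gutenberg-Richter exponential
# law truncated at an assumed maximum magnitude M_max. The largest of N
# independent events then has CDF F(m)**N.
def gr_cdf(m, b=1.0, m0=4.0, mmax=9.0):
    """Truncated exponential (Gutenberg-Richter) CDF on [m0, mmax]."""
    beta = b * math.log(10)
    num = 1 - math.exp(-beta * (m - m0))
    den = 1 - math.exp(-beta * (mmax - m0))
    return num / den

def prob_max_exceeds(m, n_events):
    """P(largest of n_events exceeds magnitude m)."""
    return 1 - gr_cdf(m) ** n_events

# Even with 1000 catalog events, a magnitude above 8.5 is seen only ~2% of
# the time under these toy parameters, so near-M_max behaviour is untestable.
p = prob_max_exceeds(8.5, n_events=1000)
```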
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps. The first step solves our optimization problem without the equal margin posteriors from the two views; in the second step, we enforce the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
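A common, simpler diagnostic for power-law tails, the Hill estimator, illustrates the problem the authors address; this sketch is not their maximum entropy test, and the sample is synthetic:

```python
import math, random

# Hill estimator of the tail exponent, applied to a seeded Pareto sample.
# For a pure Pareto law with exponent alpha, the estimate should be close to
# alpha; for a lognormal sample it drifts with the chosen tail fraction,
# which is what makes lognormal-vs-Pareto discrimination delicate.
def hill_exponent(sample, k):
    """Hill estimate of the tail exponent from the k largest observations."""
    tail = sorted(sample)[-k:]
    x_k = tail[0]
    return k / sum(math.log(x / x_k) for x in tail)

random.seed(1)
pareto = [random.paretovariate(1.5) for _ in range(20000)]
alpha_hat = hill_exponent(pareto, k=1000)
# alpha_hat should recover the true exponent 1.5 to within sampling error.
```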
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
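The statement of Zipf's law itself is easy to check numerically. The following deterministic inverse-transform construction (an illustration of the law, not the paper's growth model) builds firm sizes from a pure Zipf (Pareto, exponent 1) law and verifies that the count of firms larger than S halves when S doubles:

```python
# Zipf's law: the number of firms with size greater than S is inversely
# proportional to S. Sizes are built deterministically via inverse transform
# on an evenly spaced grid, S = 1/u with u ~ uniform(0, 1).
n = 10000
sizes = [1.0 / ((i + 0.5) / n) for i in range(n)]

def count_larger(s):
    """Number of firms with size strictly greater than s."""
    return sum(1 for x in sizes if x > s)

# N(>S) scales like n/S, so doubling the threshold halves the count.
ratio = count_larger(10) / count_larger(20)   # -> 2.0
```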
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
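The core quantity of the regularizer, the mutual information between a discrete classification response and the true label, can be computed from joint counts as follows. This is a generic sketch; the paper estimates MI via entropy estimation inside a differentiable objective, which this does not reproduce:

```python
import math

# Mutual information between a discrete classification response and the true
# label, estimated from joint counts. A response that tracks the label
# carries high MI; a response independent of the label carries none.
def mutual_information(joint):
    """MI (in nats) from a dict {(response, label): count}."""
    total = sum(joint.values())
    p = {k: v / total for k, v in joint.items()}
    pr, pl = {}, {}                      # marginals of response and label
    for (r, l), v in p.items():
        pr[r] = pr.get(r, 0) + v
        pl[l] = pl.get(l, 0) + v
    return sum(v * math.log(v / (pr[r] * pl[l]))
               for (r, l), v in p.items() if v > 0)

perfect = mutual_information({(0, 0): 50, (1, 1): 50})            # ln 2 nats
independent = mutual_information({(0, 0): 25, (0, 1): 25,
                                  (1, 0): 25, (1, 1): 25})        # 0 nats
```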
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
40 CFR 141.66 - Maximum contaminant levels for radionuclides.
2010-07-01
... of the Act, hereby identifies as indicated in the following table the best technology available for... waters. 10. Activated alumina (a), (h) Advanced All ground waters; competing anion concentrations...
Establishment of Maximum Voluntary Compressive Neck Tolerance Levels
2011-07-01
Cote, Michael; Buhrman, John; Bridges, Nathaniel; Pirnstill, Casey; Burneka, Chris; Plaga, John; Roush, Grant. Biosciences and Performance Division, Vulnerability Analysis Branch, July 2011.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
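A quick arithmetic check of the quoted figures, using the standard solar mass value (not given in the abstract):

```python
# Consistency check: express the quoted maximum iron core mass in solar masses.
M_core = 2.69e30          # maximum stellar iron core mass from the abstract, kg
M_sun = 1.989e30          # nominal solar mass, kg
core_in_solar = M_core / M_sun   # ~1.35, matching the abstract
```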
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem till date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
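The kind of Markov chain the paper builds on, Glauber dynamics on matchings with a fugacity parameter, can be sketched on a toy graph. This is an illustrative monomer-dimer chain, not the paper's algorithm, and all parameter values are invented:

```python
import random

# Toy Glauber dynamics on matchings: repeatedly pick a random edge; remove it
# if it is in the matching, add it if both endpoints are free. A high fugacity
# lam biases the chain toward large matchings; we record the largest matching
# size seen along the trajectory.
def glauber_max_matching(edges, lam=50.0, steps=20000, seed=7):
    rng = random.Random(seed)
    matching = set()
    matched = set()          # vertices covered by the current matching
    best = 0
    for _ in range(steps):
        e = rng.choice(edges)
        u, v = e
        if e in matching:
            if rng.random() < 1.0 / (1.0 + lam):   # removal move
                matching.remove(e)
                matched -= {u, v}
        elif u not in matched and v not in matched:
            if rng.random() < lam / (1.0 + lam):   # addition move
                matching.add(e)
                matched |= {u, v}
        best = max(best, len(matching))
    return best

# 6-cycle: the maximum matching has 3 edges, and the chain finds it quickly.
cycle6 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
best = glauber_max_matching(cycle6)
```

At stationarity the chain spends most of its time on the two perfect matchings of the 6-cycle (weight λ³ each versus λ² for the nine size-2 matchings), which is why a high fugacity makes the maximum easy to hit on small instances.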
Bittel, R.; Mancel, J. [Commissariat a l' Energie Atomique, 92 - Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires, departement de la protection sanitaire
1968-10-01
The most important carriers of radioactive contamination of man are foodstuffs as a whole, and not only ingested water or inhaled air. For this reason, and in accordance with the spirit of the recent recommendations of the ICRP, it is proposed to substitute the notion of maximum levels of contamination of water for the MPC. In the case of aquatic food chains (aquatic organisms and irrigated foodstuffs), knowledge of the ingested quantities and of the food/water concentration factors makes it possible to determine these maximum levels, or to establish a linear relation between the maximum levels in the case of two primary carriers of contamination (continental and sea waters). The notions of critical food consumption, critical radioelements and waste disposal formulae are considered in the same way, taking care to give the greatest possible weight to local situations. (authors)
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
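The background-only versus background-plus-source comparison can be illustrated with a minimal Poisson likelihood-ratio calculation. This is my own sketch with invented counts, not Sherpa's fitting machinery (which fits PSF-convolved spatial models across stacked observations):

```python
import math

# Poisson log-likelihoods of the counts in a candidate source region under a
# background-only model and a background-plus-source model. A large
# improvement in log-likelihood favours a real source.
def poisson_loglike(counts, expected):
    """log P(counts | expected), dropping the factorial term (it cancels)."""
    return counts * math.log(expected) - expected

observed = 15        # counts in the candidate source region (invented)
background = 5.0     # expected background counts from the background fit

ll_bkg = poisson_loglike(observed, background)
# Under the alternative, the best-fit total expectation equals the observed
# counts (background plus a free source amplitude).
ll_src = poisson_loglike(observed, float(observed))
delta = ll_src - ll_bkg   # likelihood-ratio improvement; 2*delta ~ chi^2_1
```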
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Filtering Additive Measurement Noise with Maximum Entropy in the Mean
Gzyl, Henryk
2007-01-01
The purpose of this note is to show how the method of maximum entropy in the mean (MEM) may be used to improve parametric estimation when the measurements are corrupted by a large level of noise. The method is developed in the context of a concrete example: estimation of the parameter of an exponential distribution. We compare the performance of our method with the Bayesian and maximum likelihood approaches.
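The estimation problem treated in the note can be set up in a few lines. This is a sketch of the naive maximum likelihood baseline against which MEM would be compared, not the MEM method itself; the rate, sample size, and noise level are invented:

```python
import random

# The MLE of an exponential rate is 1 / (sample mean). With additive
# zero-mean noise, the naive estimate built from the noisy sample mean stays
# roughly unbiased, but its variance grows with the noise level, which is
# where regularized approaches such as MEM can help.
random.seed(3)
rate = 2.0
n = 50000
clean = [random.expovariate(rate) for _ in range(n)]
noisy = [x + random.gauss(0.0, 0.5) for x in clean]   # additive measurement noise

mle_clean = n / sum(clean)          # 1 / sample mean on clean data
mle_noisy = n / sum(noisy)          # same estimator on noisy data
```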
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
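The Toeplitz/Levinson machinery behind the error-predicting filter can be sketched in a few lines. This is a generic Levinson-Durbin recursion, not the authors' receiver-function code; the AR(1) autocorrelation used as input is an illustrative example. Note that the reflection coefficients it produces have magnitude below 1, which is the stability property the abstract mentions:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations by Levinson recursion,
    returning the prediction-error filter coefficients, the reflection
    coefficients produced at each step, and the final error power."""
    a = np.zeros(order)
    e = r[0]                      # prediction error power
    reflections = []
    for m in range(order):
        k = (r[m + 1] - np.dot(a[:m], r[m:0:-1])) / e
        reflections.append(k)
        a_new = a.copy()
        if m > 0:
            a_new[:m] = a[:m] - k * a[m - 1::-1]
        a_new[m] = k
        a = a_new
        e *= 1.0 - k * k
    return a, reflections, e

# Autocorrelation of an AR(1) process x_t = 0.7 x_{t-1} + w_t: r[k] ~ 0.7**k
r = 0.7 ** np.arange(4)
a, ks, e = levinson_durbin(r, 2)
print(a, ks)
```

For this AR(1) input the recursion recovers the single coefficient 0.7 and a zero second coefficient, as expected.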
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, the voltage of maximum power, the current of maximum power, and the maximum power are each plotted as a function of the time of day.
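The procedure described, maximizing P(V) = V·I(V), can be sketched numerically. The single-diode cell model and all parameter values below are illustrative assumptions, not the panel from the project; the maximum is located by scanning P(V) rather than by symbolic differentiation:

```python
import numpy as np

# Illustrative single-diode cell model; all parameter values are assumptions.
I_L, I_0, V_t = 5.0, 1e-9, 0.025715  # light current (A), saturation current (A), thermal voltage (V)

def current(v):
    return I_L - I_0 * (np.exp(v / V_t) - 1.0)

def power(v):
    return v * current(v)

# Locate the maximum of P(V) on a fine voltage grid.
v = np.linspace(0.0, 0.6, 100_001)
p = power(v)
v_mp = v[np.argmax(p)]          # voltage of maximum power
i_mp = current(v_mp)            # current of maximum power
p_mp = power(v_mp)              # maximum power
print(v_mp, i_mp, p_mp)
```

Setting dP/dV = 0 analytically gives the same point; the grid search just avoids solving the resulting transcendental equation by hand.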
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a pair of run-length/AC coefficient level, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first one is given as an upper bound for the sum of squares of AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits of buffer space is sufficient to hold the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Solar Panel Maximum Power Point Tracker for Power Utilities
Sandeep Banik,
2014-01-01
Full Text Available "Solar Panel Maximum Power Point Tracker for Power Utilities": as the name implies, this is a photovoltaic system that uses a photovoltaic array as the source of electrical power supply. Every photovoltaic (PV) array has an optimum operating point, called the maximum power point, which varies depending on the insolation level and array voltage, so a maximum power point tracker (MPPT) is needed to operate the PV array at its maximum power point. The objective of this thesis project is to build a photovoltaic (PV) array of 121.6 V DC voltage (6 cells, each 20 V, 100 W) and convert the DC voltage to single-phase 120 V, 50 Hz AC voltage by switch-mode power converters and inverters.
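The abstract does not say which tracking algorithm is used, so the sketch below is the generic textbook perturb-and-observe loop on a toy power curve, not a model of the 121.6 V array described above:

```python
def perturb_and_observe(power_at, v_start=20.0, dv=0.1, steps=200):
    """Textbook perturb-and-observe MPPT: nudge the operating voltage,
    keep the direction if power rose, reverse it if power fell."""
    v, direction = v_start, 1.0
    p_prev = power_at(v)
    for _ in range(steps):
        v += direction * dv
        p = power_at(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v

# Toy PV power curve with its maximum power point at 17 V (assumed shape).
mpp_v = perturb_and_observe(lambda v: -(v - 17.0) ** 2 + 100.0)
print(mpp_v)
```

The loop settles into a small oscillation around the maximum power point, which is the characteristic (and well-known) behavior of perturb-and-observe trackers.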
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for the samples, the total length of CC needed in the design of an SFCL can be determined.
李荣华; 许泽润; 王文继
2003-01-01
Using LiTi2(PO4)3 as the parent compound and natural kaolinite as the starting raw material, a series of new lithium fast-ion conductors Li1+2xAlxScyNbyTi2-x-2ySixP3-xO12 (hereafter Sc-Nb-Lisicon) was prepared by high-temperature solid-state reaction. X-ray diffraction analysis shows that, within the composition ranges x=0.1 or x=0.2 with y≤0.4, and x=0.3 with y≤0.25, products with a Nasicon-like trigonal structure (space group R3c) are obtained. The conductivity was measured by AC impedance spectroscopy; the compositions x=0.1, y=0.3 and x=0.2, y=0.15 exhibit relatively high room-temperature conductivities of 1.05×10-4 S/cm and 2.78×10-4 S/cm, reaching 8.00×10-3 S/cm and 7.82×10-3 S/cm at 573 K, and show low activation energies of 31.4 kJ/mol and 34.8 kJ/mol over 423-573 K.
The evolution of maximum body size of terrestrial mammals.
Smith, Felisa A; Boyer, Alison G; Brown, James H; Costa, Daniel P; Dayan, Tamar; Ernest, S K Morgan; Evans, Alistair R; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; McCain, Christy; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Stephens, Patrick R; Theodor, Jessica; Uhen, Mark D
2010-11-26
The extinction of dinosaurs at the Cretaceous/Paleogene (K/Pg) boundary was the seminal event that opened the door for the subsequent diversification of terrestrial mammals. Our compilation of maximum body size at the ordinal level by sub-epoch shows a near-exponential increase after the K/Pg. On each continent, the maximum size of mammals leveled off after 40 million years ago and thereafter remained approximately constant. There was remarkable congruence in the rate, trajectory, and upper limit across continents, orders, and trophic guilds, despite differences in geological and climatic history, turnover of lineages, and ecological variation. Our analysis suggests that although the primary driver for the evolution of giant mammals was diversification to fill ecological niches, environmental temperature and land area may have ultimately constrained the maximum size achieved.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
Full Text Available We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class separates into the difference between a cross entropy and a diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
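In the Boltzmann-Gibbs-Shannon case named at the end of the abstract, the decomposition into cross entropy minus diagonal entropy is the familiar Kullback-Leibler identity:

```latex
D_{\mathrm{KL}}(p \,\|\, q)
  = \sum_i p_i \log \frac{p_i}{q_i}
  = \underbrace{-\sum_i p_i \log q_i}_{\text{cross entropy } H(p,q)}
  \;-\; \underbrace{\Bigl(-\sum_i p_i \log p_i\Bigr)}_{\text{diagonal entropy } H(p)} .
```

Minimizing the divergence over q in a model for fixed p is maximum-likelihood estimation, while maximizing H(p) under constraints gives the maximum entropy model; this is the special case of the duality the paper generalizes to other generator functions.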
Han, X.; Zang, Q.; Xiao, S.; Wang, T.; Hu, A.; Tian, B.; Li, D.; Zhou, H.; Zhao, J.; Hsieh, C.; Li, M.; Yan, N.; Gong, X.; Hu, L.; Xu, G.; Gao, X.; the EAST Team
2017-04-01
The evolution characteristics of the type-I ELMy high-confinement-mode pedestal are examined in EAST based on the recently developed Thomson scattering system. The influence of the plasma current on pedestal evolution has been confirmed experimentally. In the higher I_p case (500 kA) the pedestal height shows an increasing trend until the onset of the next ELM, whereas in the lower I_p cases (300 and 400 kA) this buildup saturates in the first ~30% of the ELM cycle. In contrast, the width increases only during the first ~70% of the ELM cycle and then remains almost stable in all three I_p cases, resulting in different widening sizes of ~1.5, 1 and 0.5 cm for 300, 400 and 500 kA respectively. Experimental results show that the pedestal pressure width correlates well with the poloidal beta as Δ_{pe,ψ} = 0.16 √β_pol, where the fitting coefficient 0.16 does not change with plasma current but is a little larger than that of other machines. For each current level, the pedestal density increases while the pedestal temperature decreases. With increasing I_p platforms, however, the pedestal height prior to the ELM onset shows a near-quadratic (within error bars) increase. Experimental measurements demonstrate that the decrease of ΔW_ELM with increasing ν*_ped comes mostly from the reduction of the plasma temperature drop, while the pedestal density height remains relatively stable. Additional injection of LHW has been shown to modify the pedestal structure, which should be responsible for the remaining scatter of the experimental data.
Simonin, A.; Achard, Jocelyn; Achkasov, K.; Bechu, S.; Baudouin, C.; Baulaigue, O.; Blondel, C.; Boeuf, J. P.; Bresteau, D.; Cartry, G.; Chaibi, W.; Drag, C.; de Esch, H. P. L.; Fiorucci, D.; Fubiani, G.; Furno, I.; Futtersack, R.; Garibaldi, P.; Gicquel, A.; Grand, C.; Guittienne, Ph.; Hagelaar, G.; Howling, A.; Jacquier, R.; Kirkpatrick, M. J.; Lemoine, D.; Lepetit, B.; Minea, T.; Odic, E.; Revel, A.; Soliman, B. A.; Teste, P.
2015-11-01
Since the signature of the ITER treaty in 2006, a new research programme targeting the emergence of a new generation of neutral beam (NB) system for the future fusion reactor (DEMO tokamak) has been underway between several laboratories in Europe. The specifications required to operate a NB system on DEMO are very demanding: the system has to provide plasma heating, current drive and plasma control at a very high level of power (up to 150 MW) and energy (1 or 2 MeV), including high performance in terms of wall-plug efficiency (η > 60%), high availability and reliability. To this aim, a novel NB concept based on the photodetachment of the energetic negative ion beam is under study. The keystone of this new concept is the achievement of a photoneutralizer in which a high-power photon flux (~3 MW) generated within a Fabry-Perot cavity will overlap, cross and partially photodetach the intense negative ion beam accelerated at high energy (1 or 2 MeV). The aspect ratio of the beam-line (source, accelerator, etc.) is specifically designed to maximize the overlap of the photon beam with the ion beam. It is shown that such a photoneutralization-based NB system would have the capability to provide several tens of MW of D0 per beam line with a wall-plug efficiency higher than 60%. A feasibility study of the concept has been launched between different laboratories to address the different physics aspects, i.e. negative ion source, plasma modelling, ion accelerator simulation, photoneutralization and high-voltage holding under vacuum. The paper describes the present status of the project and the main achievements of the developments in the laboratories.
Regina C. de M. Pires
2004-12-01
Full Text Available Soil temperature is an important parameter in strawberry cultivation, since it affects vegetative development, plant health and yield. The aim of this work was to evaluate the effect of different irrigation levels and bed covers, in the open field and under protected cultivation, on the maximum soil temperature in strawberry crops. Two experiments were carried out, one under protected cultivation and the other in the open field, at Atibaia, SP, Brazil, in a 2 x 3 factorial scheme (soil covers and irrigation levels) in randomized blocks with five replications. The soil covers were black and clear polyethylene films. Drip irrigation was applied whenever the soil water potential, measured with tensiometers installed at 10 cm depth, reached -0.010 (N1), -0.035 (N2) or -0.070 (N3) MPa. Soil temperature was recorded by thermographs with sensors installed at 5 cm depth. The cultivation environment, the soil cover and the irrigation levels all influenced the maximum soil temperature. Soil temperature under the different covers depended not only on the physical characteristics of the plastic but also on how it was installed on the bed. The maximum soil temperature increased as the soil water potential at the moment of irrigation decreased.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block-fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed-form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
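The equal-power MISO strategy can be illustrated with a small Monte Carlo sketch. It uses the ergodic rate rather than the paper's throughput and expected-rate metrics, and the antenna count and SNR are arbitrary assumptions; the point is only that spreading equal power over uncorrelated antennas hardens the channel relative to a single antenna at the same total power:

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, snr, trials = 4, 10.0, 20_000   # transmit antennas, linear SNR (assumed values)

# Rayleigh block fading: h ~ CN(0, I). With no CSI at the transmitter,
# equal power and uncorrelated signals give rate log2(1 + snr * |h|^2 / n_t).
h = (rng.standard_normal((trials, n_t)) + 1j * rng.standard_normal((trials, n_t))) / np.sqrt(2)
gain = np.sum(np.abs(h) ** 2, axis=1)
rate_miso = np.log2(1.0 + snr * gain / n_t)

# Single-antenna baseline at the same total power.
rate_siso = np.log2(1.0 + snr * np.abs(h[:, 0]) ** 2)

print(rate_miso.mean(), rate_siso.mean())
```

Because log2(1 + x) is concave and the summed channel gain has a smaller spread at the same mean, the MISO average rate comes out higher than the single-antenna baseline.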
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to search for the distribution functions of physical quantities. MENT naturally takes into consideration the demand of maximum entropy, the characteristics of the system, and the connection conditions. This allows MENT to be applied to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Full Text Available Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 mm for right male and female, and 453.35 and 420.44 mm for left male and female respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm were definitely female; for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
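The demarking-point rule reduces to two threshold comparisons. A minimal sketch using the right-side cut-offs reported in the abstract (the function name and the "indeterminate" label are illustrative):

```python
# Demarking-point sex classification from maximum femoral length (mm),
# using the right-side cut-offs reported in the study.
RIGHT_MALE_DP, RIGHT_FEMALE_DP = 476.70, 379.99

def classify_right_femur(length_mm):
    if length_mm > RIGHT_MALE_DP:
        return "male"
    if length_mm < RIGHT_FEMALE_DP:
        return "female"
    return "indeterminate"   # most bones fall between the demarking points

print(classify_right_femur(480.0), classify_right_femur(375.0), classify_right_femur(450.0))
```

The low identification percentages in the abstract reflect exactly this structure: most specimens fall between the two demarking points and remain unclassified.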
Development of a compact bushing for NBI
de Esch, H. P. L.; Simonin, A.; Grand, C.; Lepetit, B.; Lemoine, D.; Márquez-Mijares, M.; Minea, T.; Caillault, L.; Seznec, B.; Jager, T.; Odic, E.; Kirkpatrick, M. J.; Teste, Ph.; Dessante, Ph.; Almaksour, K.
2017-08-01
Research into a novel type of compact bushing is being conducted through the HVIV (High Voltage holding In Vacuum) partnership between CEA-Cadarache, GeePs-CentraleSupélec, LPGP and LCAR. The bushing aims to concentrate the high electric field inside its interior, rather than in the vacuum tank. Hence the field emission current is also concentrated inside the bushing and it can be attempted to suppress this so-called dark current by conditioning the internal surfaces and by adding gas. LCAR have performed theoretical quantum mechanical studies of electron field emission and the role of adsorbates in changing the work function. LPGP studied the ionization of gas due to field emission current and the behavior of micro particles exposed to emissive electron current in the vacuum gap under high electric fields. Experiments at GeePs have clarified the role of surface conditioning in reducing the dark current. GeePs also found that adding low-pressure nitrogen gas to the vacuum is much more effective than helium in reducing the field emission. An interesting observation is the growth of carbon structures after exposure of an electrode to the electric field. Finally, IRFM have performed experiments on a single-stage test bushing that features a 36 cm high porcelain insulator and two cylindrical electrode surfaces in vacuum or low-pressure gas. Using 0.1 Pa N2 gas, the voltage holding exceeded 185 kV over a 40 mm "vacuum" gap without dark current. Above this voltage, exterior breakdowns occurred over the insulator, which was in air. The project will finish with the fabrication of a 2-stage compact bushing, capable of withstanding 400 kV.
Quality, precision and accuracy of the maximum No. 40 anemometer
Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)
1996-12-31
This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual-level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age of 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
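The building block of such algorithms is the test of whether a candidate supertree displays a given triplet. This is not the paper's supertree algorithm, only a minimal sketch of the consistency predicate, for rooted trees written as nested tuples with string leaves: a triplet ab|c is displayed exactly when some clade contains a and b but not c.

```python
def clusters(tree, acc=None):
    """All leaf sets of subtrees of a rooted tree given as nested tuples."""
    if acc is None:
        acc = []
    if isinstance(tree, str):
        acc.append({tree})
        return {tree}, acc
    leafset = set()
    for child in tree:
        sub, _ = clusters(child, acc)
        leafset |= sub
    acc.append(leafset)
    return leafset, acc

def displays_triplet(tree, a, b, c):
    """True if the rooted tree displays the triplet ab|c, i.e. some
    clade contains both a and b but not c."""
    _, cls = clusters(tree)
    return any({a, b} <= s and c not in s for s in cls)

t = (("a", "b"), ("c", "d"))   # the tree ((a,b),(c,d))
print(displays_triplet(t, "a", "b", "c"),   # True: clade {a,b} excludes c
      displays_triplet(t, "a", "c", "b"))   # False: no clade has a,c without b
```

A maximum consistent supertree is then one maximizing the number of input triplets for which this predicate holds.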
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
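The abstract's bound (maximum seismic moment equal to the injected volume times the modulus of rigidity) converts to a moment magnitude via the standard Hanks-Kanamori relation. The rigidity value below is a typical crustal assumption, not taken from the text:

```python
import math

def max_moment_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
    """Upper-bound moment magnitude from the volume-based bound: the
    maximum seismic moment is the injected volume times the modulus of
    rigidity. The default rigidity is a typical crustal value (assumed)."""
    m0 = rigidity_pa * injected_volume_m3          # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)    # moment magnitude (Hanks-Kanamori)

# e.g. 10^6 cubic metres of injected wastewater gives a bound near magnitude 5
print(round(max_moment_magnitude(1.0e6), 2))
```

For a million cubic metres of injected fluid the bound comes out just under magnitude 5, consistent with the wastewater-disposal observations quoted in the abstract.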
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that of networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard, and a polynomial-time approximation algorithm is proposed.
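The throughput benefit of network coding under interference can be illustrated on the classic two-flow relay topology, where A and B exchange packets through a relay R and only one node may transmit per slot. This toy example is a standard textbook illustration, not the paper's LP formulation:

```python
# Two flows A->B and B->A share relay R; interference permits one
# transmission per slot, so slot counts measure inverse throughput.

def slots_without_coding(pairs):
    # Per exchanged pair: A->R, R->B, B->R, R->A = 4 transmissions.
    return 4 * pairs

def slots_with_coding(pairs):
    # Per exchanged pair: A->R, B->R, then R broadcasts XOR(a, b) once;
    # each endpoint decodes using the packet it already holds.
    return 3 * pairs

pairs = 12
print(slots_without_coding(pairs))  # 48
print(slots_with_coding(pairs))     # 36, i.e. a 4/3 throughput gain

# The decoding step is a plain XOR:
a, b = 0b1011, 0b0110
coded = a ^ b            # the relay sends a single coded packet
assert coded ^ a == b    # A, holding its own packet a, recovers b
assert coded ^ b == a    # B, holding b, recovers a
```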
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
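For context, the Wiener index being maximized here is simply the sum of shortest-path distances over all unordered vertex pairs. A brute-force BFS sketch (the example trees and adjacency-dict encoding are illustrative assumptions):

```python
from collections import deque
from itertools import count  # not strictly needed; BFS uses a deque

def wiener_index(adj):
    """Sum of shortest-path distances over all unordered vertex pairs,
    for an unweighted graph given as an adjacency dict."""
    def dists(src):
        d = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        return d

    nodes = list(adj)
    total = 0
    for i, u in enumerate(nodes):
        du = dists(u)
        for v in nodes[i + 1:]:
            total += du[v]
    return total

# Path P4 (0-1-2-3): 1+2+3+1+2+1 = 10; star K1,3 (center 0): 1+1+1+2+2+2 = 9.
# Among trees on a fixed vertex set, the path maximizes and the star
# minimizes the Wiener index.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(wiener_index(path), wiener_index(star))  # 10 9
```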
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used ... algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find ...
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess to a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of various converter systems is then verified through simulations. The proposed theory and analysis are validated through experimental investigations.
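Although the paper's analysis is analytical and topology-specific, the tracking itself is commonly implemented as a perturb-and-observe loop around the converter duty ratio or voltage. A hedged sketch with a toy concave power curve (the PV model, peak location, and step size below are assumptions of this example, not the paper's system):

```python
def pv_power(v):
    """Toy PV curve: concave in voltage with its maximum at v = 30 V."""
    return max(0.0, 100.0 - 0.25 * (v - 30.0) ** 2)

def perturb_and_observe(v=20.0, step=0.5, iters=200):
    """Perturb the operating voltage; if power dropped, reverse direction.
    The operating point climbs the p(v) curve and settles near the MPP,
    oscillating within one step of it."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power decreased: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(abs(v_mpp - 30.0) < 1.0)  # converged to within one step of the true MPP
```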
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}=\mathcal{O}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage S_rad becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2/(M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which the Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
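The quoted estimate v_h ~ T_BBN^2/(M_pl y_e^5) can be sanity-checked numerically with textbook inputs: T_BBN ~ 1 MeV, M_pl ~ 1.2×10^19 GeV, and y_e = √2 m_e/v ≈ 2.9×10^-6. These numbers are standard figures assumed for this check, not values taken from the paper:

```python
import math

# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5), in GeV units.
T_BBN = 1.0e-3                            # ~1 MeV, onset of nucleosynthesis
M_pl = 1.22e19                            # Planck mass
y_e = math.sqrt(2) * 0.511e-3 / 246.0     # electron Yukawa, sqrt(2) m_e / v

v_h = T_BBN**2 / (M_pl * y_e**5)
print(f"v_h ~ {v_h:.0f} GeV")             # lands at a few hundred GeV
```

The result comes out at a few hundred GeV, matching the paper's O(300 GeV) claim.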
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of at most 6 weeks, three video recordings were made of subjects performing five maximum phonation time trials. A panel of five experts were responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. ... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used ... algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
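The single-constraint construction described in the abstract is a standard Lagrange-multiplier calculation, which can be sketched as follows (textbook notation, not necessarily the paper's):

```latex
% Maximize Shannon entropy subject to normalization and a fixed mean log:
\max_{p}\; S[p] = -\int p(x)\,\ln p(x)\,dx
\quad \text{s.t.} \quad \int p(x)\,dx = 1, \qquad \int p(x)\,\ln x\,dx = \chi .
% Stationarity of the Lagrangian in p(x) gives
\ln p(x) = -1 + \lambda_0 + \lambda_1 \ln x
\quad\Longrightarrow\quad p(x) \propto x^{\lambda_1} \equiv x^{-\alpha},
% a pure power law, with the exponent \alpha fixed by the constraint value \chi.
```

Fixing the average of ln x is thus the single constraint the abstract refers to: it alone turns the maximum-entropy distribution into a power law.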
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled It\\^o-L\\'evy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, let $\lambda_1(G),\lambda_2(G),...,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
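As background, the Estrada index can be computed without an eigenvalue solver via $EE(G) = \mathrm{tr}(e^{A}) = \sum_{k\ge 0} \mathrm{tr}(A^k)/k!$, since $\mathrm{tr}(A^k) = \sum_i \lambda_i^k$. A pure-Python sketch (the truncation depth and test graph are assumptions of this example):

```python
import math

def mat_mult(X, Y):
    """Dense matrix product for small lists-of-lists matrices."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def estrada_index(A, terms=30):
    """EE(G) via the truncated series sum_k tr(A^k)/k!; 30 terms is
    more than enough for small graphs with bounded spectral radius."""
    n = len(A)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # A^0
    total = 0.0
    for k in range(terms):
        total += sum(P[i][i] for i in range(n)) / math.factorial(k)
        P = mat_mult(P, A)
    return total

# Triangle C3 has spectrum {2, -1, -1}, so EE = e^2 + 2/e ≈ 8.1248.
C3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(round(estrada_index(C3), 4))  # 8.1248
```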
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)
2009-02-19
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
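As a minimal illustration of the dimer-covering/perfect-matching correspondence (my own sketch, not code from the paper), the perfect matchings of a small polyomino can be enumerated by backtracking; the 2 × 3 grid has exactly three dimer coverings.

```python
from itertools import product

def grid_edges(rows, cols):
    """Vertices and edges of the rows x cols grid graph (a simple polyomino)."""
    verts = list(product(range(rows), range(cols)))
    edges = []
    for r, c in verts:
        if c + 1 < cols:
            edges.append(((r, c), (r, c + 1)))
        if r + 1 < rows:
            edges.append(((r, c), (r + 1, c)))
    return verts, edges

def perfect_matchings(verts, edges):
    """Enumerate all perfect matchings (dimer coverings) by backtracking."""
    adj = {v: [] for v in verts}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def extend(unmatched):
        if not unmatched:
            return [[]]
        u = min(unmatched)  # always branch on the smallest unmatched vertex
        result = []
        for v in adj[u]:
            if v in unmatched:
                for rest in extend(unmatched - {u, v}):
                    result.append([(u, v)] + rest)
        return result

    return extend(frozenset(verts))

verts, edges = grid_edges(2, 3)
matchings = perfect_matchings(verts, edges)
print(len(matchings))  # the 2x3 grid has 3 dimer coverings
```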
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach[gr-qc/9504004], Cai et al [hep-th/0501055,hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend Akbar--Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_{X/P}) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_{max} - X_0 = (0.59 ± 0.02) · Y_{X/P} · C.
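The reported relation can be sketched directly; the input values below are hypothetical, chosen only to illustrate the equation's form:

```python
def predict_max_biomass(x0, y_xp, mic_lactate, k=0.59):
    """X_max = X_0 + k * Y_{X/P} * C with k = 0.59 +/- 0.02 (reported fit).
    x0: inoculum biomass (g/L); y_xp: biomass yield per unit lactate (g/g);
    mic_lactate: MIC of lactate at pH 7.0 (g/L)."""
    return x0 + k * y_xp * mic_lactate

# Hypothetical illustrative values, not data from the study:
print(round(predict_max_biomass(x0=0.1, y_xp=0.15, mic_lactate=160.0), 2))  # 14.26
```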
Maximum orbit plane change with heat-transfer-rate considerations
Lee, J. Y.; Hull, D. G.
1990-01-01
Two aerodynamic maneuvers are considered for maximizing the plane change of a circular orbit: gliding flight with a maximum thrust segment to regain lost energy (aeroglide) and constant altitude cruise with the thrust being used to cancel the drag and maintain a high energy level (aerocruise). In both cases, the stagnation heating rate is limited. For aeroglide, the controls are the angle of attack, the bank angle, the time at which the burn begins, and the length of the burn. For aerocruise, the maneuver is divided into three segments: descent, cruise, and ascent. During descent the thrust is zero, and the controls are the angle of attack and the bank angle. During cruise, the only control is the assumed-constant angle of attack. During ascent, a maximum thrust segment is used to restore lost energy, and the controls are the angle of attack and bank angle. The optimization problems are solved with a nonlinear programming code known as GRG2. Numerical results for the Maneuverable Re-entry Research Vehicle with a heating-rate limit of 100 Btu/ft²-s show that aerocruise gives a maximum plane change of 2 deg, which is only 1 deg larger than that of aeroglide. On the other hand, even though aerocruise requires two thrust levels, the cruise characteristics of constant altitude, velocity, thrust, and angle of attack are easy to control.
Beneval Rosa
2008-10-01
Full Text Available
In the present work, different combinations of nitrogen and phosphorus doses were evaluated for leaf dry matter production, tiller number and leaf area expansion of Panicum maximum Jacq. cv. Mombaça. To this end, an experiment was carried out in a greenhouse at the School of Agronomy and Food Engineering of the Federal University of Goiás. The growth substrate was 7.0 dm^{3} of soil, held in 9.0 dm^{3} plastic pots, from a dystrophic dark-red Latosol collected at the Samambaia Farm in the municipality of Goiânia, GO. The treatments consisted of four nitrogen doses applied as urea (0, 100, 200 and 400 mg/dm^{3} of N) and four phosphorus doses applied as triple superphosphate (0, 250, 500 and 750 mg/dm^{3} of P), with four replicates. The nitrogen doses were split into three applications at ten-day intervals for each evaluation cut. The experimental design was completely randomized, with the treatments arranged in a complete 4 × 4 factorial whose factors were the nitrogen and phosphorus doses. Sixty days after emergence, a uniformity cut was made at 20 cm above the soil. For evaluation purposes, three further cuts (at 20 cm height) were made every thirty days. It was concluded that nitrogen doses between 300 and 400 mg/dm^{3} of N combined with phosphorus doses between 250 and 500 mg/dm^{3} of P are the most suitable for greenhouse studies with Mombaça grass.
KEYWORDS: Leaf area, Mombaça grass, leaf dry matter, tillers, C_{4} plant.
The aim of this study was to evaluate the effects of different combinations of nitrogen and phosphorus levels on leaf area, leaf dry matter and tiller number of Panicum maximum Jacq. cv. Mombaça.
$\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs
van de Geer, Sara
2012-01-01
We consider the problem of regularized maximum likelihood estimation for the structure and parameters of a high-dimensional, sparse directed acyclic graphical (DAG) model with Gaussian distribution, or equivalently, of a Gaussian structural equation model. We show that the $\\ell_0$-penalized maximum likelihood estimator of a DAG has about the same number of edges as the minimal-edge I-MAP (a DAG with minimal number of edges representing the distribution), and that it converges in Frobenius norm. We allow the number of nodes $p$ to be much larger than sample size $n$ but assume a sparsity condition and that any representation of the true DAG has at least a fixed proportion of its non-zero edge weights above the noise level. Our results do not rely on the restrictive strong faithfulness condition which is required for methods based on conditional independence testing such as the PC-algorithm.
LITERATURE REVIEW ON MAXIMUM LOADING OF RADIONUCLIDES ON CRYSTALLINE SILICOTITANATE
Adu-Wusu, K.; Pennebaker, F.
2010-10-13
Plans are underway to use small column ion exchange (SCIX) units installed in high-level waste tanks to remove Cs-137 from highly alkaline salt solutions at the Savannah River Site. The ion exchange material slated for the SCIX project is engineered or granular crystalline silicotitanate (CST). Information on the maximum loading of radionuclides on CST is needed by Savannah River Remediation for safety evaluations. A literature review has been conducted that culminated in the estimation of the maximum loading of all but one of the radionuclides of interest (Cs-137, Sr-90, Ba-137m, Pu-238, Pu-239, Pu-240, Pu-241, Am-241, and Cm-244). No data were found for Cm-244.
Probabilistic maximum-value wind prediction for offshore environments
Staid, Andrea; Pinson, Pierre; Guikema, Seth D.
2015-01-01
We use statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Center for Medium-Range Weather Forecasts forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop probabilistic forecasts, which result in greater value to the end-user. The models outperform traditional baseline forecast methods and achieve low predictive errors on the order of 1–2 m/s. We show the results of their predictive accuracy for different lead times and different training methodologies.
The maximum intelligible range of the human voice
Boren, Braxton
This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people that could hear such a voice unamplified in the open air. In particular, it examines an early auditory experiment by Benjamin Franklin which sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of such a noise source are examined to more precisely pinpoint Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and material of Market Street is constructed as a computer model which is then used to construct an acoustic cone tracing model. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings are carried out on trained actors and singers to determine their maximum time-averaged SPL at 1 m. This suggests that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Based on Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
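The underlying acoustics can be sketched with simple free-field spreading; the intelligibility threshold and crowd density below are illustrative assumptions, not the dissertation's calibrated cone-tracing values:

```python
import math

def intelligible_radius(spl_at_1m, min_intelligible_spl):
    """Free-field (inverse-square) spreading: SPL(r) = SPL(1 m) - 20*log10(r)."""
    return 10 ** ((spl_at_1m - min_intelligible_spl) / 20.0)

def crowd_estimate(radius_m, density_per_m2=1.0):
    """Listeners packed in a semicircle in front of the speaker."""
    return math.pi * radius_m ** 2 / 2.0 * density_per_m2

# Illustrative: ~90 dBA voice with a very quiet crowd allowing ~45 dBA
# at the farthest listener (both numbers are assumptions)
r = intelligible_radius(90.0, 45.0)
print(round(r, 1), int(crowd_estimate(r)))  # roughly 180 m and ~50,000 listeners
```

With these assumed numbers the semicircular audience comes out near the 50,000 "ideal conditions" figure quoted above; noisier crowds shrink the radius, and the estimate falls quickly toward the 20,000-30,000 range.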
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors......, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics....
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
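Under equal priors and additive white Gaussian noise, the MAP decision reduces to picking the hypothesis with the largest correlation statistic; a toy sketch (the codes and the observation below are invented):

```python
def correlate(received, template):
    """Coherent correlation statistic; with equal priors and white Gaussian
    noise the MAP decision reduces to maximizing this over hypotheses."""
    return sum(r * t for r, t in zip(received, template))

def map_decode(received, hypotheses):
    """Return the label of the most likely hypothesized phase-coded signal."""
    return max(hypotheses, key=lambda h: correlate(received, hypotheses[h]))

# Two hypothesized binary phase codes (0 / pi phase -> +1 / -1 chips)
hyps = {
    "s0": [+1, +1, -1, +1, -1],
    "s1": [-1, +1, +1, -1, +1],
}
# Noisy observation of s1 (small fixed perturbations standing in for noise)
rx = [-0.9, 1.2, 0.8, -1.1, 0.7]
print(map_decode(rx, hyps))  # prints "s1"
```

The patented system goes further, using a MAP phase estimator so that spurious random phase perturbations are averaged into the statistic rather than treated as known.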
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, and then performs MEC clustering in the feature space. The experimental results show that the proposed method performs better on non-hyperspherical and complex data structures.
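A minimal sketch of the underlying maximum entropy clustering iteration, in the input space rather than the kernel feature space the paper uses (the data and parameters are invented):

```python
import math

def mec(points, centers, beta=1.0, iters=50):
    """Maximum entropy clustering: soft memberships u_ij ~ exp(-beta * d_ij^2),
    followed by membership-weighted center updates. (KMEC applies this in a
    Mercer-kernel feature space; this sketch stays in the 1-D input space.)"""
    for _ in range(iters):
        # E-step: Gibbs / maximum-entropy memberships
        U = []
        for x in points:
            w = [math.exp(-beta * (x - c) ** 2) for c in centers]
            s = sum(w)
            U.append([wi / s for wi in w])
        # M-step: membership-weighted means
        centers = [
            sum(U[i][j] * points[i] for i in range(len(points))) /
            sum(U[i][j] for i in range(len(points)))
            for j in range(len(centers))
        ]
    return centers

pts = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
# Converges near the two cluster means
print([round(c, 2) for c in mec(pts, centers=[2.0, 9.0], beta=1.0)])
```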
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Full Text Available Ship squat is a combined effect of ship’s draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers conducted tests on models and ships to find a mathematical formula that can define squat. Various forms of calculating squat can be found in the literature. Among those most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
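One of the compared expressions, the ICORELS formula as it is commonly quoted in the squat literature, is easy to sketch; the ship and channel figures below are hypothetical:

```python
import math

def icorels_squat(disp_vol_m3, lpp_m, speed_ms, depth_m, g=9.81):
    """Maximum bow squat by the ICORELS formula as commonly quoted:
    S = 2.4 * (V_disp / Lpp^2) * Fnh^2 / sqrt(1 - Fnh^2),
    with Fnh = U / sqrt(g*h) the depth Froude number (valid for Fnh < 1)."""
    fnh = speed_ms / math.sqrt(g * depth_m)
    if fnh >= 1.0:
        raise ValueError("formula valid only for subcritical speeds (Fnh < 1)")
    return 2.4 * (disp_vol_m3 / lpp_m ** 2) * fnh ** 2 / math.sqrt(1 - fnh ** 2)

# Hypothetical cargo ship: 30,000 m^3 displacement, Lpp = 180 m, 12 m water depth
for kn in (8, 10, 12):
    u = kn * 0.5144  # knots to m/s
    print(kn, "kn ->", round(icorels_squat(30000.0, 180.0, u, 12.0), 2), "m")
```

The strong speed dependence (roughly quadratic well below the critical speed) is the common feature of all the compared formulas.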
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
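A sketch of the idea under simplifying assumptions (well-separated harmonics, white noise, invented signal parameters): each channel contributes a harmonic-model fit term at a candidate fundamental, and the per-channel terms are summed before maximizing, since only the fundamental frequency is shared.

```python
import math

def harmonic_power(x, fs, f0, n_harm):
    """Power captured by a harmonic model at fundamental f0; for white noise
    and well-separated harmonics this approximates the per-channel ML cost."""
    n = len(x)
    total = 0.0
    for h in range(1, n_harm + 1):
        w = 2.0 * math.pi * h * f0 / fs
        re = sum(x[t] * math.cos(w * t) for t in range(n))
        im = sum(x[t] * math.sin(w * t) for t in range(n))
        total += (re * re + im * im) / n
    return total

def estimate_f0(channels, fs, grid, n_harm=3):
    """Channels share f0 but not amplitudes/phases, so their cost
    functions are simply summed before the grid search."""
    return max(grid, key=lambda f0: sum(harmonic_power(x, fs, f0, n_harm)
                                        for x in channels))

fs, f0 = 8000.0, 220.0
n = 800  # 0.1 s of signal
# Two channels, same pitch, different amplitudes and phases (noise-free sketch)
ch1 = [1.0 * math.sin(2 * math.pi * f0 * t / fs)
       + 0.5 * math.sin(2 * math.pi * 2 * f0 * t / fs) for t in range(n)]
ch2 = [0.3 * math.sin(2 * math.pi * f0 * t / fs + 0.7) for t in range(n)]
grid = [100.0 + 2 * i for i in range(151)]  # 100..400 Hz in 2 Hz steps
print(estimate_f0([ch1, ch2], fs, grid))  # 220.0
```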
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
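The Poisson-likelihood fitting idea can be sketched with a one-parameter grid search; the spectrum and line shape below are invented, not CORA's actual model:

```python
import math

def poisson_loglike(counts, model):
    """log L = sum_i (k_i * ln(lambda_i) - lambda_i); the k! term is dropped
    because it does not depend on the model parameters."""
    return sum(k * math.log(lam) - lam for k, lam in zip(counts, model))

def line_model(amplitude, center, sigma, background, nbins):
    """Gaussian emission line on a flat background, in expected counts/bin."""
    return [background + amplitude * math.exp(-0.5 * ((i - center) / sigma) ** 2)
            for i in range(nbins)]

# Invented low-count "observed" spectrum: ~2 counts/bin background
# plus a line centered on bin 10
observed = [2, 3, 1, 2, 2, 3, 2, 4, 6, 9, 12, 9, 5, 4, 2, 1, 2, 3, 2, 2]
# Grid search the line amplitude, keeping the other parameters fixed
amps = [a * 0.5 for a in range(1, 41)]
best = max(amps, key=lambda a: poisson_loglike(
    observed, line_model(a, center=10, sigma=1.5, background=2.0, nbins=20)))
print(best)
```

At such low counts a least-squares fit would be biased, which is exactly why CORA insists on the Poisson likelihood.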
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit, a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
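The core computation, PCA of an ensemble covariance, can be sketched on a toy 1-D ensemble with a single built-in collective mode (a stand-in for, and much simpler than, the maximum likelihood covariance estimate the paper uses):

```python
import math

def covariance(ensemble):
    """Sample covariance matrix of an ensemble of coordinate vectors."""
    n, d = len(ensemble), len(ensemble[0])
    mean = [sum(x[j] for x in ensemble) / n for j in range(d)]
    return [[sum((x[i] - mean[i]) * (x[j] - mean[j]) for x in ensemble) / (n - 1)
             for j in range(d)] for i in range(d)]

def dominant_mode(cov, iters=200):
    """Power iteration for the principal eigenvector (dominant PCA mode)."""
    d = len(cov)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Toy ensemble: a mean structure plus one collective mode of varying amplitude
base = [0.0, 1.0, 2.0, 3.0]
mode = [0.6, -0.6, 0.3, 0.4]
ensemble = [[b + c * m for b, m in zip(base, mode)] for c in (-1.0, -0.3, 0.4, 0.9)]
pc1 = dominant_mode(covariance(ensemble))
print([round(x, 2) for x in pc1])  # proportional to the built-in mode
```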
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to $T^{-1}(I - A)$, where $I$ is the first ionization potential, $A$ is the electron affinity, and $T$ is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
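The event-by-event likelihood assignment described above can be illustrated with a toy two-component mixture fit in Python; the energy peaks, widths, and event counts below are invented for illustration and are not PEN's actual PDFs:

```python
import math, random

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def neg_log_likelihood(f, events, pdf_a, pdf_b):
    # Mixture: each event belongs to process A (weight f) or B (weight 1 - f).
    return -sum(math.log(f * pdf_a(x) + (1 - f) * pdf_b(x)) for x in events)

random.seed(1)
# Hypothetical observable: a narrow signal peak at 70, a broad background at 35.
signal = [random.gauss(70.0, 3.0) for _ in range(200)]
background = [random.gauss(35.0, 10.0) for _ in range(800)]
events = signal + background

pdf_a = lambda x: gauss_pdf(x, 70.0, 3.0)
pdf_b = lambda x: gauss_pdf(x, 35.0, 10.0)

# Scan the mixture fraction on a grid and keep the maximum-likelihood value.
best_f = min((neg_log_likelihood(f / 1000, events, pdf_a, pdf_b), f / 1000)
             for f in range(1, 1000))[1]
print(round(best_f, 2))  # close to the true fraction 0.2
```

In the real analysis the PDFs come from Monte Carlo and the fit runs over five processes rather than two, but the likelihood machinery is the same.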
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Maximum capacities of the 100-B water plant
Strand, N.O.
1953-04-27
Increases in process water flows will be needed as the current program of increasing pile power levels continues. The future process water flows that will be required are known to be beyond the present maximum capacities of component parts of the water system. It is desirable to determine the present maximum capacity of each major component part so that plans can be made for modifications and/or additions to the present equipment to meet future required flows. The apparent hydraulic limit of the present piles is about 68,000 gpm. This figure is based on a tube inlet pressure of 400 psi, a tube flow of 34 gpm, and 2,000 effective tubes. In this document the results of tests and calculations to determine the present maximum capacities of each major component part of the 100-B water system will be presented. Emergency steam-operated pumps will not be considered, as it is doubtful that year-round operation of a steam-driven pump could be economically justified. Some possible ways to increase the process water flows of each component part of the water system to the ultimate of 68,000 gpm are given.
Mapping the MPM maximum flow algorithm on GPUs
Solomon, Steven; Thulasiraman, Parimala
2010-11-01
The GPU offers a high degree of parallelism and computational power that developers can exploit for general purpose parallel applications. As a result, a significant level of interest has been directed towards GPUs in recent years. Regular applications, however, have traditionally been the focus of work on the GPU. Only very recently has there been a growing number of works exploring the potential of irregular applications on the GPU. We present a work that investigates the feasibility of Malhotra, Pramodh Kumar and Maheshwari's "MPM" maximum flow algorithm on the GPU that achieves an average speedup of 8 when compared to a sequential CPU implementation.
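For contrast with the GPU work above, the sequential CPU baseline idea can be sketched with a textbook BFS-based (Edmonds-Karp) maximum flow; this is a simpler classical algorithm, not the MPM algorithm itself, which builds layered networks and pushes flow via node potentials:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow over an adjacency-matrix capacity graph
    (a standard sequential baseline, not Malhotra-Pramodh Kumar-Maheshwari)."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual network.
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return total
        # Find the bottleneck residual capacity, then augment along the path.
        v, bottleneck = sink, float("inf")
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

cap = [[0, 10, 10, 0],
       [0, 0, 5, 10],
       [0, 0, 0, 10],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 20
```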
陈嘉; 李拥军; 杨文萍
2009-01-01
Objective To study the effects of carbon disulfide exposure within the national maximum allowable concentration(MAC) on blood pressure and electrocardiogram, and associations with selected factors. Methods Workers in a chemical fiber factory were divided into two groups based on the type of work: a high exposure group (HEG) of 821 individuals and a low exposure group (LEG) of 259. The CS_2 concentration at workplace was controlled under the national MAC. A set of 250 randomly selected people taking routine phys-ical check-ups in the same period and hospital constituted the control group. The systolic blood pressure (SBP) and diastolic hlood pressure (DBP) were measured on the arm, and the pulse pressure (PP) and mean arterial blood pressure (MABP) were calculated based on SBP and DBP. The blood pressure data, along with the results of the routine 12-lead electrocardiography taken at rest and records on gender, age, years of work, type of work, and concentrations of triglycerol, cholesterol, and glucose in blood, were compiled for analyses. Risk factors upon CS_2 exposure for the increase of blood pressure and occurrence of electrocardiogram abnor-malities were identified and rationalized. Results Significant difference (P<0.01) in the average values of SBP, DBP, MABP, and the corresponding abnormality incident rates was found between HEG and LEG, and between HEG and the control group. For both HEG and LEG, the incident rate of DBP abnormality(high DBP) is nearly two times as high as that of SBP. Type of work is the largest risk factor in both the high SBP and high DBP subgroups, with odds ratios (OR) of 2.086 and 2.331 respectively, and high CS_2 exposure presents more than double the risk than low exposure. On the incident rate of ECG abnormalities, beth exposure groups are significantly different (P<0.01) to the control group. High SBP in LEG and high DBP in HEG were found to be significant risk factors (OR = 3.531 and 1.638 respectively), while blood glucose
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
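The MAXENT selection step can be illustrated on a minimal example: the maximum-entropy distribution over six states under a single mean-value constraint, with the Lagrange multiplier found by bisection. The six faces and target mean are arbitrary stand-ins for constituents and a macroscopic constraint:

```python
import math

def maxent_die(target_mean, lo=-5.0, hi=5.0, iters=100):
    """Maximum-entropy distribution over faces 1..6 with a fixed mean:
    p_i ∝ exp(lam * i); the multiplier lam is found by bisection
    (the constrained mean is monotone increasing in lam)."""
    def mean_for(lam):
        w = [math.exp(lam * i) for i in range(1, 7)]
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / sum(w)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in range(1, 7)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)
print([round(x, 3) for x in p])  # biased toward high faces; mean is 4.5
```

Adding a further constraint, as the abstract proposes, simply adds another Lagrange multiplier to the exponent.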
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
Full Text Available The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
Full Text Available An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model with respect to the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
Full Text Available The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
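Both the cubic-time specification and the elegant linear-time solution mentioned above are short enough to state directly. This is a plain Python sketch (Kadane's well-known fold), not the datatype-generic monadic development of the paper:

```python
def mss_cubic(xs):
    """Direct specification: the largest sum over all contiguous segments,
    including the empty segment (O(n^3) when sums are recomputed per segment)."""
    n = len(xs)
    return max(sum(xs[i:j]) for i in range(n + 1) for j in range(i, n + 1))

def mss_linear(xs):
    """Linear-time solution: fold left to right, tracking the best segment
    ending at the current position and the best seen so far."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)
        best = max(best, ending_here)
    return best

xs = [31, -41, 59, 26, -53, 58, 97, -93, -23, 84]
print(mss_cubic(xs), mss_linear(xs))  # 187 187
```

The linear version is the end point of the calculational development the abstract alludes to: the maximum distributes over the fold once the problem is phrased in terms of segment-ending sums.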
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Full Text Available Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c>0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ_0, μ_1] for all t up to τ = inf{t>0 | X_t ∉ (ℓ_0, ℓ_1)}. The optimal control switches between μ_0 and μ_1 across a switching curve s ↦ g∗(s) that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations) in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and---to a lesser extent---the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index; deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
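The spectrally-imposed limit can be estimated numerically. The sketch below uses a rough Gaussian stand-in for the CIE photopic curve V(λ) (peak 555 nm, width chosen here for illustration; the real curve is tabulated, not Gaussian), so the resulting figure is only indicative:

```python
import math

def vee(lam_nm):
    """Crude Gaussian approximation to the photopic sensitivity V(lambda):
    peak at 555 nm, sigma ~42 nm. An assumption for illustration only."""
    return math.exp(-0.5 * ((lam_nm - 555.0) / 42.0) ** 2)

def luminous_efficacy(spectrum, lo=400, hi=700, step=1):
    """683 lm/W times the V-weighted fraction of the radiant power
    emitted inside the bandpass [lo, hi] nm."""
    num = sum(vee(l) * spectrum(l) for l in range(lo, hi + 1, step))
    den = sum(spectrum(l) for l in range(lo, hi + 1, step))
    return 683.0 * num / den

flat = lambda lam: 1.0  # equal-energy white across the visible bandpass
print(round(luminous_efficacy(flat), 1))  # a few hundred lm/W, well below 683
```

Narrowing the bandpass raises the figure toward the monochromatic limit of 683 lm/W at 555 nm, at the cost of color rendering, which is exactly the trade-off the abstract explores.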
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, the researches on the synchronization phenomenon are key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
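A pairwise maximum entropy model over binary states can be made concrete by exact enumeration for a tiny system; the three "economies" and coupling values below are hypothetical, not fitted G7 parameters:

```python
import math
from itertools import product

def pairwise_maxent_probs(h, J):
    """Exact state probabilities of a pairwise maximum entropy (Ising-type)
    model: P(s) ∝ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), s_i in {-1,+1}
    (read s_i as recession vs expansion of economy i)."""
    n = len(h)
    states = list(product([-1, 1], repeat=n))
    weights = []
    for s in states:
        e = sum(h[i] * s[i] for i in range(n))
        e += sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
        weights.append(math.exp(e))
    z = sum(weights)
    return {s: w / z for s, w in zip(states, weights)}

# Hypothetical 3-economy system with positive pairwise couplings.
h = [0.0, 0.0, 0.0]
J = [[0, 0.5, 0.5], [0, 0, 0.5], [0, 0, 0]]
P = pairwise_maxent_probs(h, J)
print(round(P[(1, 1, 1)], 3))  # fully aligned states are the most probable
```

Positive couplings concentrate probability on synchronized configurations, which is the clustering effect the G7 analysis quantifies; the abstract's point is that for larger systems this pairwise form stops being sufficient.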
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum space quantum gravity geometry with a metric s_μν and a curvature tensor P^λ_μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, even though objects of interest may be either moving or stationary. In this paper, we propose layers segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.
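The per-pixel codebook idea can be caricatured in a few lines. This 1-D toy (invented intensities, single-intensity codewords) illustrates only the match/no-match segmentation logic, not the paper's full feature model or its Maximum Entropy layer classifier:

```python
def train_codebook(frames, tol=10):
    """Per-pixel codebook: keep one codeword per distinct intensity range
    seen in training frames (a much-simplified sketch of the codebook idea)."""
    n = len(frames[0])
    book = [[] for _ in range(n)]
    for frame in frames:
        for i, v in enumerate(frame):
            if not any(abs(v - c) <= tol for c in book[i]):
                book[i].append(v)
    return book

def segment(frame, book, tol=10):
    """1 = foreground (no matching codeword for this pixel), 0 = background."""
    return [0 if any(abs(v - c) <= tol for c in book[i]) else 1
            for i, v in enumerate(frame)]

# Toy 1-D 'frames': background flickers around 50; an object appears at pixel 2.
train = [[50, 52, 49, 51], [51, 50, 50, 50], [49, 51, 52, 49]]
book = train_codebook(train)
print(segment([50, 51, 200, 50], book))  # [0, 0, 1, 0]
```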
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 male subjects participated in an investigation of the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in the maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained at grip spans of 50-mm, 65-mm, and 80-mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in the cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
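The core noise-bias phenomenon (a nonlinear point estimate biased even though the underlying pixel noise has zero mean) can be demonstrated with a toy calculation; the ellipticity value and noise level below are invented, and this is not the paper's estimator:

```python
import random, math

random.seed(0)
true_e1, true_e2, sigma = 0.3, 0.0, 0.1

# Estimate the ellipticity modulus |e| from many noisy realizations of the
# components. Because the modulus is a nonlinear (convex-in-the-tails)
# function of the components, the average estimate sits above the truth.
n = 20000
estimates = [math.hypot(true_e1 + random.gauss(0, sigma),
                        true_e2 + random.gauss(0, sigma)) for _ in range(n)]
mean_est = sum(estimates) / n
print(round(mean_est, 3), "vs true", math.hypot(true_e1, true_e2))  # biased high
```

The leading bias here scales like sigma^2 / (2 |e|), which is the kind of next-to-leading-order term the abstract computes and then removes using the likelihood itself.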
Higgs, Roger
2017-04-01
The 255 authors of IPCC's "Climate Change 2013: The Physical Science Basis" include no sedimentary geologists, specialists in ever-changing sea level (SL). According to IPCC the 0.3m SL rise(1) since tide-gauge records began (c.1700CE, Little Ice Age[LIA] acme) is unprecedented in >2ky, implicating mankind's CO2 emissions. On the contrary, a c.5m SL rise and fall between c.400CE and 1700 are indicated independently by three lines of evidence: British archaeology(2,3); worldwide raised-shoreline benchmarks(4); and Red Sea foraminifera O18 fluctuations(5). The c.5m fall is attributable to 590-1640CE cooling (ice growth) shown by a global proxy temperature graph(6; cf.7). This 1ky-long cooling and ensuing 1850-2017 warming, both sawtooth-style, in turn mimic a 1ky solar decline then rise(8), moreso after aligning the 590CE peak temperature(6) with the c.525CE solar "Grand maximum" (GM) or near-GM(8). This 65y lag reflects hitherto-neglected ocean-conveyor-belt circulation, i.e. downwelling Atlantic surface water, variably solar-warmed (depending on solar-governed cloudiness[9]), upwells decades later beside Antarctica, returning northward to affect continental air temperatures. The conveyor slowed in the LIA (c.150y offset between 1280-1700CE cluster of solar Grand minima[8] and 1430-1850 cool phase[6]). Lately the lag, obvious from visual cross-matching of 1850-2012 instrumental-temperature peaks and troughs(10) versus the 1700-2016 sunspot chart (Google images), is c.85y (1890 solar trough matches 1975 temperature trough). Similarly, SL(1) clearly lags temperature(10) by 15y (1964 and 1976 temperature troughs match 1979 and 1991 SL troughs). Thus the total SL-solar lag is 100y (85+15). Appreciating the 85y and 100y lags enables vital predictions: sunspots increased (sawtooth-style) from c.1890 until the 1958 GM (the only definite GM in >2ky[8]), therefore ongoing warming will peak c.2043 (1958+85), and SL c.2058. How high will SL rise? The 1958 solar GM exceeded (95
A maximum entropy model for opinions in social groups
Davis, Sergio; Navarrete, Yasmín; Gutiérrez, Gonzalo
2014-04-01
We study how the opinions of a group of individuals determine their spatial distribution and connectivity, through an agent-based model. The interaction between agents is described by a Hamiltonian in which agents are allowed to move freely without an underlying lattice (the average network topology connecting them is determined from the parameters). This kind of model was derived using maximum entropy statistical inference under fixed expectation values of certain probabilities that (we propose) are relevant to social organization. Control parameters emerge as Lagrange multipliers of the maximum entropy problem, and they can be associated with the level of consequence between the personal beliefs and external opinions, and the tendency to socialize with peers of similar or opposing views. These parameters define a phase diagram for the social system, which we studied using Monte Carlo Metropolis simulations. Our model presents both first and second-order phase transitions, depending on the ratio between the internal consequence and the interaction with others. We have found a critical value for the level of internal consequence, below which the personal beliefs of the agents seem to be irrelevant.
Predicting the solar maximum with the rising rate
Du, Z L
2011-01-01
The growth rate of solar activity in the early phase of a solar cycle has been known to be well correlated with the subsequent amplitude (solar maximum). It provides very useful information for a new solar cycle, as its variation reflects the temporal evolution of the dynamic process of solar magnetic activities from the initial phase to the peak phase of the cycle. The correlation coefficient between the solar maximum (Rmax) and the rising rate (β_a) at Δm months after the solar minimum (Rmin) is studied and shown to increase as the cycle progresses, with an inflection point (r = 0.83) at about Δm = 20 months. The prediction error of Rmax based on β_a is found within estimation at the 90% level of confidence, and the relative prediction error will be less than 20% when Δm ≥ 20. From the above relationship, the current cycle (24) is preliminarily predicted to peak around October 2013 with a size of Rmax = 84 ± 33 at the 90% level of confidence.
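At its core the rising-rate method is a regression of Rmax on β_a over past cycles. A minimal least-squares sketch, with invented (rising rate, maximum) pairs rather than real cycle data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical (rising rate, solar maximum) pairs for past cycles.
beta = [2.0, 3.5, 5.0, 4.2, 6.1, 3.0]
rmax = [75, 110, 150, 130, 180, 95]
a, b = fit_line(beta, rmax)
print(round(a + b * 2.5))  # predicted Rmax for a new cycle with rising rate 2.5
```

The paper's contribution is quantifying how the scatter of this relation shrinks as Δm grows, so that predictions made 20 or more months after minimum carry usefully small errors.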
Optimization of agitation and aeration conditions for maximum virginiamycin production.
Shioya, S; Morikawa, M; Kajihara, Y; Shimizu, H
1999-02-01
To maximize the productivity of virginiamycin, which is a commercially important antibiotic as an animal feed additive, an empirical approach was employed in the batch culture of Streptomyces virginiae. Here, the effects of dissolved oxygen (DO) concentration and agitation speed on the maximum cell concentration at the production phase, as well as on the productivity of virginiamycin, were investigated. To maintain the DO concentration in the fermentor at a certain level, either the agitation speed or the inlet oxygen concentration of the supply gas was manipulated. It was found that increasing the agitation speed had a positive effect on the antibiotic productivity independent of the DO concentration. The optimum DO concentration, agitation speed and addition of an autoregulator, virginiae butanolide C (VB-C), were determined to maximize virginiamycin productivity. The optimal strategy was to start the cultivation at 450 rpm and to continue until the DO concentration reached 80%. After reaching 80%, the DO concentration was maintained at this level by changing the agitation speed, up to a maximum of 800 rpm. The addition of an optimal amount of the autoregulator VB-C in an experiment resulted in the maximal production of virginiamycin M (399 mg/l), which was about 1.8-fold those obtained previously.
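The reported control strategy (run at 450 rpm until DO reaches 80%, then adjust agitation, capped at 800 rpm, to hold DO at that level) can be sketched as a simple rule-based loop; the step size and DO readings below are invented:

```python
def agitation_schedule(do_readings, start_rpm=450, max_rpm=800, step=50):
    """Sketch of the reported strategy: hold start_rpm until dissolved oxygen
    (DO, in %) first reaches 80, then nudge agitation up or down (within
    [start_rpm, max_rpm]) to keep DO at the 80% setpoint."""
    rpm, holding, log = start_rpm, False, []
    for do in do_readings:
        if not holding and do >= 80:
            holding = True
        if holding:
            if do < 80:
                rpm = min(max_rpm, rpm + step)
            elif do > 80:
                rpm = max(start_rpm, rpm - step)
        log.append(rpm)
    return log

print(agitation_schedule([40, 60, 80, 75, 78, 82, 80]))
```

A real fermentor controller would use a tuned PID loop rather than fixed steps; the point here is only the two-phase structure of the optimal strategy.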
Kordheili, Reza Ahmadi; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna
2014-01-01
High penetration of photovoltaic panels in distribution grid can bring the grid to its operation limits. The main focus of the paper is to determine maximum photovoltaic penetration level in the grid. Three main criteria were investigated for determining maximum penetration level of PV panels...... for this grid: even distribution of PV panels, aggregation of panels at the beginning of each feeder, and aggregation of panels at the end of each feeder. Load modeling is done using Velander formula. Since PV generation is highest in the summer due to irradiation, a summer day was chosen to determine maximum...
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
20 Employees' Benefits (2010-04-01), CREDITABLE RAILROAD COMPENSATION, § 211.14 Maximum creditable compensation. ... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
49 Transportation (2010-10-01), Allowable Stress, § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic-there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure is sufficiently well-known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
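The N-gram baseline that MALCOM is positioned against can be sketched directly: a smoothed bigram model scores sequences, and unusually low log-likelihood flags anomalies. The procedure names below are invented, and this is the baseline technique, not MALCOM's continuity map:

```python
import math
from collections import Counter

def train_bigram(sequences, alpha=1.0):
    """Smoothed bigram model over categorical tokens (e.g. procedure codes).
    Returns a function scoring the log-likelihood of a new sequence."""
    pairs, firsts, vocab = Counter(), Counter(), set()
    for seq in sequences:
        vocab.update(seq)
        for a, b in zip(seq, seq[1:]):
            pairs[(a, b)] += 1
            firsts[a] += 1
    v = len(vocab)
    def logprob(seq):
        # Add-alpha smoothing so unseen transitions get small, nonzero mass.
        return sum(math.log((pairs[(a, b)] + alpha) / (firsts[a] + alpha * v))
                   for a, b in zip(seq, seq[1:]))
    return logprob

histories = [["exam", "xray", "cast"], ["exam", "xray", "cast"],
             ["exam", "blood", "exam"]]
logprob = train_bigram(histories)
typical = logprob(["exam", "xray", "cast"])
odd = logprob(["cast", "cast", "cast"])
print(typical > odd)  # anomalous sequences score lower
```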
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
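The Poisson-likelihood criterion can be illustrated with a toy line-flux fit. CORA derives a fixed-point equation for efficiency, whereas the sketch below simply grid-searches the line amplitude; the bin counts, profile, and background level are all invented:

```python
import math

def poisson_loglike(counts, model):
    # Up to the constant sum of log(n_i!), the Poisson log-likelihood.
    return sum(n * math.log(m) - m for n, m in zip(counts, model))

def fit_line_flux(counts, profile, background, amps):
    """Grid-search the line amplitude maximizing the Poisson likelihood of
    model counts a * profile_i + background (a toy stand-in for CORA's
    fixed-point iteration)."""
    def ll(a):
        return poisson_loglike(counts, [a * f + background for f in profile])
    return max(amps, key=ll)

# Hypothetical 7-bin spectrum: Gaussian line profile on a flat background of 2.
profile = [math.exp(-0.5 * (i - 3) ** 2) for i in range(7)]
counts = [2, 3, 8, 14, 9, 2, 1]   # low-count data with Poisson noise
amps = [a / 10 for a in range(1, 301)]
a_hat = fit_line_flux(counts, profile, background=2.0, amps=amps)
print(a_hat)
```

With so few counts per bin, a Gaussian (chi-square) fit would be biased, which is exactly why the rigorous Poisson treatment matters here.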
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Maximum entropy algorithm and its implementation for the neutral beam profile measurement
Lee, Seung Wook; Cho, Gyu Seong [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of); Cho, Yong Sub [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1997-12-31
A tomography algorithm that maximizes the entropy of the image using the Lagrangian multiplier technique and the conjugate gradient method has been designed for the measurement of the 2D spatial distribution of intense neutral beams of the KSTAR NBI (Korea Superconducting Tokamak Advanced Research Neutral Beam Injector), which is now being designed. A possible detection system was assumed and a numerical simulation was implemented to test the reconstruction quality of given beam profiles. The algorithm has good applicability to sparse projection data and thus can be used for neutral beam tomography. 8 refs., 3 figs. (Author)
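The Lagrange-multiplier core of a maximum-entropy method can be illustrated in one dimension: among discrete distributions with a fixed measured mean, entropy is maximized by an exponential-family solution, and the multiplier is solved for numerically. This is a toy sketch of the principle only, not the beam-tomography code; the support and constraint value are invented:

```python
import numpy as np
from scipy.optimize import brentq

x = np.arange(10, dtype=float)   # discrete support 0..9
target_mean = 2.5                # the "measurement" the distribution must match

def mean_for(lam):
    """Mean of the maximum-entropy distribution p_i ∝ exp(-lam * x_i)."""
    w = np.exp(-lam * x)
    p = w / w.sum()
    return (p * x).sum()

# solve for the Lagrange multiplier that satisfies the mean constraint
lam = brentq(lambda l: mean_for(l) - target_mean, -5.0, 5.0)
w = np.exp(-lam * x)
p = w / w.sum()
entropy = -(p * np.log(p)).sum()
```

The tomography problem replaces the single mean constraint with many line-integral (projection) constraints, one multiplier each, but the structure of the solution is the same.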
Maximum holding endurance time: Effects of load and load's center of gravity height.
Lee, Tzu-Hsien
2015-01-01
Manual holding tasks pose a potential risk for the development of musculoskeletal injuries since they are prone to induce localized muscle fatigue. Maximum holding endurance time is a significant parameter for the design of manual holding tasks. This study aimed to examine the effects of load and the load's center of gravity (COG) height on maximum holding endurance time. Fifteen young and healthy males were recruited as participants. A factorial design was used to examine the effects of load and load's COG height on maximum holding endurance time. Four levels of load (15%, 30%, 45% and 60% of the participant's maximum holding capacity) and two levels of load's COG height in box (0 cm and 40 cm high from the handle position) were examined. Maximum holding endurance time decreased with increasing load and/or increasing load's COG height. The effect of load's COG height on maximum holding endurance time decreased with increasing load. Load, load's COG height, and the interaction of load and load's COG height significantly affected maximum holding endurance time. Practitioners should realize the effects of load, load's COG height, and the interaction of load and load's COG height on maximum holding endurance time when setting the working conditions of holding tasks.
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM), are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.).
Night vision image fusion for target detection with improved 2D maximum entropy segmentation
Bai, Lian-fa; Liu, Ying-bin; Yue, Jiang; Zhang, Yi
2013-08-01
Infrared and low-light-level (LLL) images are used for night vision target detection. Given the characteristics of night vision imaging and the shortcomings of traditional detection algorithms in segmenting and extracting targets, we propose a method of infrared and LLL image fusion for target detection with improved 2D maximum entropy segmentation. Firstly, the two-dimensional histogram is improved using the gray level and the maximum gray level in a weighted area, and weights are selected to calculate the maximum entropy for infrared and LLL image segmentation using this histogram. Compared with traditional maximum entropy segmentation, the algorithm is significantly more effective in target detection, background suppression, and target extraction. The validity of a multi-dimensional-feature AND operation at the infrared and LLL image feature level for target detection is then verified. Experimental results show that the detection algorithm performs well for both single-target and multiple-target detection in complex backgrounds.
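As a simplified one-dimensional analogue of the segmentation step, the classic Kapur maximum-entropy threshold picks the gray level that maximizes the summed entropies of the background and object histograms; the paper's weighted 2D-histogram variant builds on this idea. The image and values below are synthetic:

```python
import numpy as np

def max_entropy_threshold(image, bins=256):
    """Kapur's 1-D maximum-entropy threshold over an integer gray-level image."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    cum = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        w0, w1 = cum[t], 1.0 - cum[t]
        if w0 <= 0 or w1 <= 0:
            continue
        p0 = p[: t + 1] / w0                        # background class distribution
        p1 = p[t + 1 :] / w1                        # object class distribution
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:                        # maximize total class entropy
            best_h, best_t = h0 + h1, t
    return best_t

rng = np.random.default_rng(1)
# synthetic image: dark background plus a bright "target" patch
img = rng.normal(50, 10, (64, 64)).clip(0, 255)
img[20:40, 20:40] = rng.normal(200, 10, (20, 20)).clip(0, 255)
t = max_entropy_threshold(img.astype(int))
```

The 2D version replaces the gray-level histogram with a joint histogram of gray level and a local neighborhood statistic, which makes the split more robust to noise.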
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
Anonymous
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
Maximum Velocities in Flexion and Extension Actions for Sport
Jessop David M.
2016-04-01
Full Text Available Speed of movement is fundamental to the outcome of many human actions. A variety of techniques can be implemented in order to maximise movement speed depending on the goal of the movement, constraints, and the time available. Knowing maximum movement velocities is therefore useful for developing movement strategies but also as input into muscle models. The aim of this study was to determine maximum flexion and extension velocities about the major joints in upper and lower limbs. Seven university to international level male competitors performed flexion/extension at each of the major joints in the upper and lower limbs under three conditions: isolated; isolated with a countermovement; involvement of proximal segments. 500 Hz planar high speed video was used to calculate velocities. The highest angular velocities in the upper and lower limb were 50.0 rad·s⁻¹ and 28.4 rad·s⁻¹, at the wrist and knee, respectively. As was true for most joints, these were achieved with the involvement of proximal segments; however, ANOVA analysis showed few significant differences (p < 0.05) between conditions. Different segment masses, structures and locations produced differing results in the upper and lower limbs, highlighting the requirement of segment-specific strategies for maximal movements.
Noise and physical limits to maximum resolution of PET images
Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU ' Gregorio Maranon' , E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2007-10-01
In this work we show that there is a limit for the maximum resolution achievable with a high resolution PET scanner, as well as for the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation and thus cannot be overcome with any particular reconstruction method. These effects prevent the spatial high-frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, like the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is outlined as a limiting factor on yielding high-resolution images in tomographs with small crystal sizes. These results have implications regarding how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Maximum entropy models of ecosystem functioning
Bertram, Jason, E-mail: jason.bertram@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)
2014-12-05
Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
Application of Maximum Entropy Deconvolution to ${\\gamma}$-ray Skymaps
Raab, Susanne
2015-01-01
Skymaps measured with imaging atmospheric Cherenkov telescopes (IACTs) represent the real source distribution convolved with the point spread function of the observing instrument. Current IACTs have an angular resolution in the order of 0.1$^\\circ$ which is rather large for the study of morphological structures and for comparing the morphology in $\\gamma$-rays to measurements in other wavelengths where the instruments have better angular resolutions. Serendipitously it is possible to approximate the underlying true source distribution by applying a deconvolution algorithm to the observed skymap, thus effectively improving the instruments angular resolution. From the multitude of existing deconvolution algorithms several are already used in astronomy, but in the special case of $\\gamma$-ray astronomy most of these algorithms are challenged due to the high noise level within the measured data. One promising algorithm for the application to $\\gamma$-ray data is the Maximum Entropy Algorithm. The advantages of th...
Adaptive edge image enhancement based on maximum fuzzy entropy
ZHANG Xiu-hua; YANG Kun-tao
2006-01-01
Based on the maximum fuzzy entropy principle, the edge image with low contrast is optimally classified into two classes adaptively, under the conditions of probability partition and fuzzy partition. The optimal threshold is used as the classification threshold, and a local parametric gray-level transformation is applied to the obtained classes. By means of the two representing parameters, the homogeneity of the regions in the edge image is improved. Simulations on a set of test images show that the proposed technique possesses excellent performance in homogeneity, and that the extracted and enhanced edges provide an efficient edge representation of images.
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
USER
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
Maximum daily rainfall in South Korea
Saralees Nadarajah; Dongseok Choi
2007-08-01
Annual maxima of daily rainfall for the years 1961–2001 are modeled for five locations in South Korea (chosen to give a good geographical representation of the country). The generalized extreme value distribution is fitted to data from each location to describe the extremes of rainfall and to predict its future behavior. We find evidence to suggest that the Gumbel distribution provides the most reasonable model for four of the five locations considered. We explore the possibility of trends in the data but find no evidence suggesting trends. We derive estimates of 10, 50, 100, 1000, 5000, 10,000, 50,000 and 100,000 year return levels for daily rainfall and describe how they vary with the locations. This paper provides the first application of extreme value distributions to rainfall data from South Korea.
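The paper's procedure can be sketched with synthetic data: fit a generalized extreme value (GEV) distribution to a series of annual maxima and invert its CDF for return levels. The location and scale numbers below are invented for illustration, not the Korean rainfall estimates:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
# synthetic "annual maximum daily rainfall" (mm) for 41 years, Gumbel-distributed
annual_max = rng.gumbel(120.0, 40.0, size=41)

# fit the three-parameter GEV; a shape c near 0 suggests the Gumbel special case,
# which is the conclusion the paper reaches for four of the five locations
c, loc, scale = genextreme.fit(annual_max)

def return_level(T):
    """Level exceeded on average once every T years (inverse CDF at 1 - 1/T)."""
    return genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)

levels = {T: return_level(T) for T in (10, 50, 100, 1000)}
```

Note that scipy's shape convention is c = -xi relative to the usual GEV parameterization, so a heavy upper tail corresponds to negative c here.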
Maximum permissible concentrations of uranium in air
Adams, N
1973-01-01
The retention of uranium by bone and kidney has been re-evaluated taking account of recently published data for a man who had been occupationally exposed to natural uranium aerosols and for adults who had ingested uranium at normal dietary levels. For life-time occupational exposure to uranium aerosols the new retention functions yield a greater retention in bone and a smaller retention in kidney than the earlier ones, which were based on acute intakes of uranium by terminal patients. Hence bone replaces kidney as the critical organ. The (MPC)ₐ for uranium-238 on radiological considerations, using the current (1959) ICRP lung model with the new retention functions, is slightly smaller than for the earlier functions, but the (MPC)ₐ determined by chemical toxicity remains the most restrictive.
The relationship between the Guinea Highlands and the West African offshore rainfall maximum
Hamilton, H. L.; Young, G. S.; Evans, J. L.; Fuentes, J. D.; Núñez Ocasio, K. M.
2017-01-01
Satellite rainfall estimates reveal a consistent rainfall maximum off the West African coast during the monsoon season. An analysis of 16 years of rainfall in the monsoon season is conducted to explore the drivers of such copious amounts of rainfall. Composites of daily rainfall and midlevel meridional winds centered on the days with maximum rainfall show that the day with the heaviest rainfall follows the strongest midlevel northerlies but coincides with peak low-level moisture convergence. Rain type composites show that convective rain dominates the study region. The dominant contribution to the offshore rainfall maximum is convective development driven by the enhancement of upslope winds near the Guinea Highlands. The enhancement in the upslope flow is closely related to African easterly waves propagating off the continent that generate low-level cyclonic vorticity and convergence. Numerical simulations reproduce the observed rainfall maximum and indicate that it weakens if the African topography is reduced.
Accurate Maximum Power Tracking in Photovoltaic Systems Affected by Partial Shading
Pierluigi Guerriero
2015-01-01
Full Text Available A maximum power tracking algorithm exploiting operating point information gained on individual solar panels is presented. The proposed algorithm recognizes the presence of multiple local maxima in the power-voltage curve of a shaded solar field and evaluates the coordinates of the absolute maximum. The effectiveness of the proposed approach is evidenced by means of circuit-level simulation and experimental results. Experiments showed that, in comparison with a standard perturb and observe algorithm, we achieve faster convergence in normal operating conditions (when the solar field is uniformly illuminated) and we accurately locate the absolute maximum power point in partial shading conditions, thus avoiding convergence on local maxima.
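The failure mode the authors address can be reproduced with a toy perturb-and-observe (P&O) tracker on a double-peaked power-voltage curve: plain P&O settles on whichever local maximum it starts near, whereas a method with panel-level information can target the absolute maximum (emulated here by a brute-force scan). The curve and all numbers are invented for illustration:

```python
def pv_power(v):
    """Double-peaked power-voltage curve typical of a partially shaded string."""
    return (max(0.0, 40 * v * (1 - v / 18))
            + max(0.0, 90 * (v - 14) * (1 - (v - 14) / 22)))

def perturb_and_observe(v0, step=0.2, iters=200):
    """Textbook P&O: keep perturbing in the same direction while power rises."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:              # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

# starting near the lower-voltage local peak (~9 V), P&O stays there
v_local, p_local = perturb_and_observe(v0=6.0)

# a global scan (standing in for panel-level information) finds the true maximum (~25 V)
vs = [i * 0.05 for i in range(0, 721)]
v_global = max(vs, key=pv_power)
p_global = pv_power(v_global)
```

On this curve P&O leaves roughly two thirds of the available power on the table, which is exactly the gap a global-maximum tracker is designed to close.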
M. Mihelich
2014-11-01
Full Text Available We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10–100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order term proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on a same set of taxa, the maximum agreement subtree problem (MAST), respectively, maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees Of Life. We provide two linear time algorithms to check the isomorphism, respectively, compatibility, of a set of trees or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms, whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
连惠婷; 李荣华; 王文继
2004-01-01
Using Li3xLa0.67-xTiO3 as the parent material, a series of new lithium fast-ion conductors, Li3xLa0.67-xAlyTi1-2yNbyO3, were prepared by doping through high-temperature solid-state reaction. X-ray diffraction analysis shows that in this system (hereafter abbreviated Al-Nb), at x = 0.10 and y < 0.050 the product is a single perovskite solid solution, while at x = 0.10 and y ≥ 0.050 an Al2O3 impurity phase also appears. The conductivity was measured by AC impedance spectroscopy; the results show that the compounds exhibit a relatively high room-temperature conductivity of up to 2.52×10⁻⁴ S/cm, with a maximum conductivity of 4.76×10⁻³ S/cm at 523 K. The activation energy lies between 20 and 25 kJ/mol, and the stability is improved.
Rayleigh-maximum-likelihood bilateral filter for ultrasound image enhancement.
Li, Haiyan; Wu, Jun; Miao, Aimin; Yu, Pengfei; Chen, Jianhua; Zhang, Yufeng
2017-04-17
Ultrasound imaging plays an important role in computer-aided diagnosis since it is non-invasive and cost-effective. However, ultrasound images are inevitably contaminated by noise and speckle during acquisition, which hinder the physician's interpretation of the images and decrease the accuracy of clinical diagnosis. Denoising is therefore an important component in enhancing the quality of ultrasound images; however, current methods have notable limitations: they can remove noise while ignoring the statistical characteristics of speckle, thus undermining the effectiveness of despeckling, or vice versa. In addition, most existing algorithms do not identify noise, speckle or edge before removing noise or speckle, and thus they reduce noise and speckle while blurring edge details. It is therefore a challenging issue for traditional methods to effectively remove noise and speckle in ultrasound images while preserving edge details. To overcome these limitations, a novel method, called the Rayleigh-maximum-likelihood switching bilateral filter (RSBF), is proposed to enhance ultrasound images in two steps: noise, speckle and edge detection followed by filtering. Firstly, a sorted quadrant median vector scheme is utilized to calculate the reference median in a filtering window, which is compared with the central pixel to classify the target pixel as noise, speckle or noise-free. Subsequently, the noise is removed by a bilateral filter and the speckle is suppressed by a Rayleigh-maximum-likelihood filter, while the noise-free pixels are kept unchanged. To quantitatively evaluate the performance of the proposed method, synthetic ultrasound images contaminated by speckle are simulated using a speckle model that follows a Rayleigh distribution. Thereafter, the corrupted synthetic images are generated by multiplying the original image with Rayleigh-distributed speckle of various signal to noise ratio (SNR) levels and
Present and Last Glacial Maximum climates as states of maximum entropy production
Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere
2011-01-01
The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...
LANDFILL OPERATION FOR CARBON SEQUESTRATION AND MAXIMUM METHANE EMISSION CONTROL
Don Augenstein
2001-02-01
The work described in this report, to demonstrate and advance this technology, has used two demonstration-scale cells of 8000 metric tons (tonnes), sufficient to replicate many heat and compaction characteristics of larger "full-scale" landfills. An enhanced demonstration cell has received moisture supplementation to field capacity. This is the maximum moisture waste can hold while still limiting the liquid drainage rate to minimal and safely manageable levels. The enhanced landfill module was compared to a parallel control landfill module receiving no moisture additions. Gas recovery has continued for a period of over 4 years. It is quite encouraging that the enhanced cell methane recovery has been close to 10-fold that experienced with conventional landfills. This is the highest methane recovery rate per unit waste, and thus progress toward stabilization, documented anywhere for such a large waste mass. This high recovery rate is attributed to moisture and to elevated temperature attained inexpensively during startup. Economic analyses performed under Phase I of this NETL contract indicate "greenhouse cost effectiveness" to be excellent. Other benefits include substantial waste volume loss (over 30%), which translates to extended landfill life. Other environmental benefits include rapidly improved quality and stabilization (lowered pollutant levels) of the liquid leachate which drains from the waste.
王雪丽; 陶剑; 史宁中
2005-01-01
The primary goal of a phase I clinical trial is to find the maximum tolerable dose of a treatment. In this paper, we propose a new stepwise method, based on confidence bounds and information incorporation, to determine the maximum tolerable dose among given dose levels. On the one hand, in order to avoid severe or even fatal toxicity and to reduce the number of experimental subjects, the new method starts from the lowest dose level and then proceeds in a stepwise fashion. On the other hand, in order to improve the accuracy of the recommendation, the final recommendation of the maximum tolerable dose incorporates the information from an additional experimental cohort at the same dose level. Furthermore, empirical simulation results show that the new method has some real advantages in comparison with the modified continual reassessment method.
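The stepwise escalation idea can be illustrated with a small sketch. The confidence bound below is a plain one-sided normal approximation, and the toxicity limit, cohort size, and toxicity counts are hypothetical; this is not the authors' actual procedure.

```python
import math

def upper_conf_bound(tox, n, z=1.645):
    """One-sided upper bound on the toxicity rate (normal approximation)."""
    p = tox / n
    return p + z * math.sqrt(p * (1.0 - p) / n + 1e-12)

def stepwise_mtd(tox_counts, cohort_size, limit=0.33):
    """Illustrative stepwise escalation: start at the lowest dose level and
    escalate while the upper confidence bound on the observed toxicity rate
    stays below the limit; return the last acceptable level (-1 if none)."""
    mtd = -1
    for level, tox in enumerate(tox_counts):
        if upper_conf_bound(tox, cohort_size) < limit:
            mtd = level
        else:
            break
    return mtd
```

Escalation stops at the first level whose upper bound crosses the limit, so the recommended level is the last one that stayed acceptable.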
Berna, C.; Escriva, A.; Munoz-Cobo, J. L.; Posada, J. M.
2014-07-01
This work simulates, with the TRACE v5.0 patch 3 code, a turbine-trip transient from full power without reactor trip. In particular, a steady state was obtained with conditions very similar to those of a previous simulation performed with the RELAP-MOD3 code. The transient results are also satisfactory: the pressures, temperatures, and mass flows, in both the primary and secondary circuits, are very similar in the two cases. In conclusion, the TRACE v5.0 patch 3 model of the Trillo plant has been shown to reproduce the transient under study, constituting a step in the verification process of the code. (Author)
Maximum Range of a Projectile Thrown from Constant-Speed Circular Motion
Poljak, Nikola
2016-11-01
The problem of determining the angle θ at which a point mass launched from ground level with a given speed v0 will reach a maximum distance is a standard exercise in mechanics. There are many possible ways of solving this problem, leading to the well-known answer of θ = π/4, producing a maximum range of D_max = v0^2/g, with g being the free-fall acceleration. Conceptually and calculationally more difficult problems have been suggested to improve student proficiency in projectile motion, with the most famous example being the Tarzan swing problem. The problem of determining the maximum distance of a point mass thrown from constant-speed circular motion is presented and analyzed in detail in this text. The calculational results confirm several conceptually derived conclusions regarding the initial throw position and provide some details on the angles and the way of throwing (underhand or overhand) that produce the maximum throw distance.
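The textbook result quoted above is easy to check numerically; a minimal sketch, with the launch speed chosen arbitrarily:

```python
import math

def projectile_range(theta, v0, g=9.81):
    """Range of a point mass launched from ground level at angle theta."""
    return v0 ** 2 * math.sin(2.0 * theta) / g

# Scan launch angles in (0, pi/2) for the angle giving maximum range.
v0 = 20.0
angles = [i * math.pi / 2000 for i in range(1, 1000)]
best = max(angles, key=lambda th: projectile_range(th, v0))
d_max = projectile_range(best, v0)
```

The scan recovers θ = π/4 and the corresponding range v0^2/g.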
Y. Labbi
2015-08-01
Full Text Available Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power, since the peak power point depends on the temperature and the irradiation level. Maximum power point tracking is therefore necessary for maximum efficiency. In this work, a Particle Swarm Optimization (PSO) algorithm is proposed as a maximum power point tracker for a photovoltaic panel, used to locate the optimal MPP so that the panel's maximum power is generated under different operating conditions. A photovoltaic system including a solar panel and a PSO MPP tracker is modelled and simulated; the simulations show the effectiveness of PSO in extracting maximum energy and in responding quickly to changes in working conditions.
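As a sketch of the approach, the fragment below runs a bare-bones PSO over a hypothetical single-peak P-V curve standing in for a real panel model; the curve, swarm parameters, and voltage bounds are illustrative assumptions, not the paper's setup.

```python
import random

def pv_power(v):
    """Hypothetical single-peak P-V curve standing in for a real panel model."""
    return max(0.0, v * (8.0 - 0.4 * v))   # peak power 40 W at v = 10 V

def pso_mpp(f, lo, hi, n=10, iters=50, w=0.6, c1=1.5, c2=1.5, seed=1):
    """Bare-bones PSO maximizing f over [lo, hi]."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]                  # each particle's best position so far
    gbest = max(pbest, key=f)       # swarm's best position so far
    for _ in range(iters):
        for i in range(n):
            vel[i] = (w * vel[i]
                      + c1 * rng.random() * (pbest[i] - pos[i])
                      + c2 * rng.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if f(pos[i]) > f(pbest[i]):
                pbest[i] = pos[i]
        gbest = max(pbest, key=f)
    return gbest

v_mpp = pso_mpp(pv_power, 0.0, 20.0)
```

Unlike hill-climbing trackers, the swarm search does not get stuck on local peaks, which is the usual argument for PSO under partial shading.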
Investigation on the Maximum Power Point in Solar Panel Characteristics Due to Irradiance Changes
Abdullah, M. A.; Fauziah Toha, Siti; Ahmad, Salmiah
2017-03-01
One of the disadvantages of the photovoltaic module compared to other renewable resources is the dynamic character of solar irradiance due to inconsistent weather conditions and surrounding temperature. Commonly, a photovoltaic power generation system includes an embedded control system to maximize the power generated despite the inconsistency in irradiance. In order to simplify the power optimization control, this paper presents the characteristics of the Maximum Power Point at various irradiance levels for Maximum Power Point Tracking (MPPT). The technique requires a set of data from a photovoltaic simulation model to be extrapolated into a standard relationship between irradiance and maximum power. The result shows that the relationship between irradiance and maximum power can be represented by a simplified quadratic equation.
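A simplified quadratic P_max(G) of this kind can be pinned down exactly from three simulated (irradiance, maximum power) samples; the sample values below are hypothetical, not taken from the paper.

```python
def quadratic_through(points):
    """Exact quadratic P = a*G**2 + b*G + c through three (G, P) samples,
    solved with Cramer's rule on the 3x3 Vandermonde system."""
    (x1, y1), (x2, y2), (x3, y3) = points
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det([[x1 * x1, x1, 1], [x2 * x2, x2, 1], [x3 * x3, x3, 1]])
    a = det([[y1, x1, 1], [y2, x2, 1], [y3, x3, 1]]) / D
    b = det([[x1 * x1, y1, 1], [x2 * x2, y2, 1], [x3 * x3, y3, 1]]) / D
    c = det([[x1 * x1, x1, y1], [x2 * x2, x2, y2], [x3 * x3, x3, y3]]) / D
    return a, b, c

# Hypothetical (irradiance W/m^2, max power W) samples from a simulated panel.
samples = [(200.0, 38.0), (600.0, 118.0), (1000.0, 190.0)]
a, b, c = quadratic_through(samples)
```

With more than three samples one would fit the quadratic by least squares instead, but the interpolation above already captures the curve's shape.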
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base was specialized into two special cases; namely, the subset D of the problem was taken to be an independent set and a circuit of the matroid, respectively. It was proved that under these circumstances the collections of k-limited bases satisfy the base axioms. A new matroid was thereby determined, and the problem of the k-limited maximum base was transformed into the problem of the maximum base of this new matroid. For each of the two special cases, an algorithm, in essence a greedy algorithm on the original matroid, was presented. The algorithms were proved to be correct and more efficient, in terms of algorithmic complexity, than the algorithm presented by Ma Zhongfan.
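The greedy construction of a maximum-weight base can be sketched on the graphic matroid of a graph, where a base is a maximum-weight spanning forest and independence means acyclicity; this is a generic illustration of the matroid greedy algorithm, not the authors' k-limited procedure.

```python
def max_weight_base(edges, n):
    """Greedy maximum-weight base of the graphic matroid on n vertices:
    scan edges (weight, u, v) in decreasing weight and keep an edge whenever
    it preserves independence (creates no cycle), tested via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    base = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            base.append((w, u, v))
    return base

# Small example graph on 4 vertices.
forest = max_weight_base([(3, 0, 1), (2, 1, 2), (1, 0, 2), (5, 2, 3)], 4)
```

The matroid base axioms are exactly what guarantee that this myopic scan returns a globally maximum-weight base.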
Blind Joint Maximum Likelihood Channel Estimation and Data Detection for SIMO Systems
Sheng Chen; Xiao-Chen Yang; Lei Chen; Lajos Hanzo
2007-01-01
A blind adaptive scheme is proposed for joint maximum likelihood (ML) channel estimation and data detection of single-input multiple-output (SIMO) systems. The joint ML optimisation over channel and data is decomposed into an iterative optimisation loop. An efficient global optimisation algorithm called the repeated weighted boosting search is employed at the upper level to optimally identify the unknown SIMO channel model, and the Viterbi algorithm is used at the lower level to produce the maximum likelihood sequence estimation of the unknown data sequence. A simulation example is used to demonstrate the effectiveness of this joint ML optimisation scheme for blind adaptive SIMO systems.
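The iterative decomposition, alternating channel estimation against data detection, can be sketched on a much-reduced problem: a flat (memoryless) real channel with BPSK symbols, where least squares stands in for the boosting search and a hard slicer stands in for the Viterbi algorithm. This toy analogue is an assumption on my part, not the authors' SIMO algorithm.

```python
def blind_ml_siso(y, h0=1.0, iters=10):
    """Alternating ML loop for y_k = h*s_k + noise, BPSK s_k in {-1, +1}:
    detect symbols given the channel, then re-estimate the channel by least
    squares given the symbols. Blind estimation leaves a sign ambiguity."""
    h = h0
    s = []
    for _ in range(iters):
        s = [1.0 if h * yk >= 0.0 else -1.0 for yk in y]    # symbol decisions
        h = sum(sk * yk for sk, yk in zip(s, y)) / len(y)   # LS channel update
    return h, s

# Noiseless toy observations generated by h = 0.8 and s = (1,-1,1,1,-1).
h_est, s_est = blind_ml_siso([0.8, -0.8, 0.8, 0.8, -0.8])
```

Each half-step cannot decrease the likelihood, which is why the loop converges; the sign ambiguity on (h, s) is inherent to any blind scheme.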
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
With the idea of maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide the region deletion test rules and design an interval maximum entropy algorithm for quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
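The maximum entropy function referred to here is, in its common form, the log-sum-exp smoothing of the max operator, which is differentiable and converges to the true maximum as the control parameter p grows; a minimal sketch:

```python
import math

def entropy_max(values, p):
    """Log-sum-exp (maximum entropy) smooth approximation of max(values);
    always >= the true max and converges to it as p grows."""
    m = max(values)  # shift by the max for numerical stability
    return m + math.log(sum(math.exp(p * (v - m)) for v in values)) / p
```

Applied to the constraint functions of a quadratic program, this smoothing turns the non-differentiable max-penalty into a differentiable one that standard unconstrained methods can handle.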
Integer Programming Model for Maximum Clique in Graph
YUAN Xi-bo; YANG You; ZENG Xin-hai
2005-01-01
The maximum clique or maximum independent set of a graph is a classical problem in graph theory. Combining Boolean algebra and integer programming, two integer programming models for the maximum clique problem, which improve on earlier results, are designed in this paper. A programming model for the maximum independent set then follows as a corollary of the main results. These two models can easily be applied in computer algorithms and software and are suitable for graphs of any scale. Finally, the models are presented as Lingo algorithms, verified and compared on several examples.
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we survey the development of the maximum-entropy clustering algorithm, point out that the algorithm is not new in essence, and construct two examples showing that the iterative sequence generated by the maximum-entropy clustering algorithm may not converge to a local minimum of its objective function but to a saddle point. Based on these results, we show that the convergence theorem for the maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general.
Julien Maheut
2015-07-01
proposed to analyze the levels, theoretically considering both the physical space and the times existing in the system. Finally, an analysis through a discrete-event simulation with Simio Simulation Software® is proposed.
Modeling Mediterranean ocean climate of the Last Glacial Maximum
U. Mikolajewicz
2010-10-01
Full Text Available A regional ocean general circulation model of the Mediterranean is used to study the climate of the last glacial maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions nontrivial. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of the salinity in the Mediterranean in spite of reduced net evaporation.
Modeling Mediterranean Ocean climate of the Last Glacial Maximum
U. Mikolajewicz
2011-03-01
Full Text Available A regional ocean general circulation model of the Mediterranean is used to study the climate of the Last Glacial Maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions complicated. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of salinity in the Mediterranean in spite of reduced net evaporation.
Paddle River Dam : review of probable maximum flood
Clark, D. [UMA Engineering Ltd., Edmonton, AB (Canada); Neill, C.R. [Northwest Hydraulic Consultants Ltd., Edmonton, AB (Canada)
2008-07-01
The Paddle River Dam was built in northern Alberta in the mid 1980s for flood control. According to the 1999 Canadian Dam Association (CDA) guidelines, this 35 metre high, zoned earthfill dam with a spillway capacity sized to accommodate a probable maximum flood (PMF) is rated as a very high hazard. At the time of design, the PMF was estimated to have a peak flow rate of 858 m³/s. A review of the PMF in 2002 increased the peak flow rate to 1,890 m³/s. In light of a 2007 revision of the CDA safety guidelines, the PMF was reviewed and the inflow design flood (IDF) was re-evaluated. This paper discussed the levels of uncertainty inherent in PMF determinations and some difficulties encountered with the SSARR hydrologic model and the HEC-RAS hydraulic model in unsteady mode. The paper also presented and discussed the analysis used to determine incremental damages, upon which a new IDF of 840 m³/s was recommended. The paper discussed the PMF review, modelling methodology, hydrograph inputs, and incremental damage of floods. It was concluded that the PMF review, involving hydraulic routing through the valley bottom together with reconsideration of the previous runoff modelling, provides evidence that the peak reservoir inflow could reasonably be reduced by approximately 20 per cent. 8 refs., 5 tabs., 8 figs.
Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.
2016-01-01
Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of spacecraft, launch vehicles, aircraft and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full wave simulation results are used to validate the foundational model.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...
49 CFR 174.86 - Maximum allowable operating speed.
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...
Parametric optimization of thermoelectric elements footprint for maximum power generation
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
The development studies in thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-perform...
30 CFR 56.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 56.19066 Section 56.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 56.19066 Maximum riders in a conveyance. In shafts inclined over 45...
30 CFR 57.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 57.19066 Section 57.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 57.19066 Maximum riders in a conveyance. In shafts inclined over 45...
Maximum Atmospheric Entry Angle for Specified Retrofire Impulse
T. N. Srivastava
1969-07-01
Full Text Available Maximum atmospheric entry angles for vehicles initially moving in elliptic orbits are investigated and it is shown that tangential retrofire impulse at the apogee results in the maximum entry angle. Equivalence of maximizing the entry angle and minimizing the retrofire impulse is also established.
5 CFR 838.711 - Maximum former spouse survivor annuity.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the...
46 CFR 151.45-6 - Maximum amount of cargo.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Maximum amount of cargo. 151.45-6 Section 151.45-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES BARGES CARRYING BULK LIQUID HAZARDOUS MATERIAL CARGOES Operations § 151.45-6 Maximum amount of cargo. (a)...
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... rate effective on the date the supplemental annuity begins, before any reduction for a private pension... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52...
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a...
Maximum-entropy clustering algorithm and its global convergence analysis
Anonymous
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
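A one-dimensional sketch of the soft, entropy-regularized update, with memberships proportional to exp(-β d²) followed by membership-weighted mean centers; the data, β, and initial centers below are made up for illustration.

```python
import math

def me_cluster(xs, centers, beta, iters=50):
    """Maximum-entropy style clustering in 1-D: soft memberships
    p(j|x) proportional to exp(-beta*(x - c_j)**2), then each center is
    recomputed as the membership-weighted mean of the data."""
    for _ in range(iters):
        num = [0.0] * len(centers)
        den = [0.0] * len(centers)
        for x in xs:
            ws = [math.exp(-beta * (x - c) ** 2) for c in centers]
            z = sum(ws)
            for j, wj in enumerate(ws):
                num[j] += (wj / z) * x
                den[j] += wj / z
        centers = [num[j] / den[j] for j in range(len(centers))]
    return centers

# Two well-separated groups around 0 and 5; made-up starting centers.
cs = sorted(me_cluster([0.0, 0.1, -0.1, 5.0, 5.1, 4.9], [1.0, 4.0], beta=5.0))
```

As β grows the memberships harden and the update approaches the hard C-means step, which is the soft-generalization relationship the abstract mentions.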
Distribution of maximum loss of fractional Brownian motion with drift
Çağlar, Mine; Vardar-Acar, Ceren
2013-01-01
In this paper, we find bounds on the distribution of the maximum loss of fractional Brownian motion with H >= 1/2 and derive estimates on its tail probability. Asymptotically, the tail of the distribution of maximum loss over [0, t] behaves like the tail of the marginal distribution at time t.
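For H = 1/2 the process reduces to standard Brownian motion with drift, and the maximum loss of a path, its largest peak-to-trough drop, is easy to sample by Monte Carlo; the discretization below is a sketch under that H = 1/2 assumption, with step count and seed chosen arbitrarily.

```python
import random

def max_loss_bm(t=1.0, n=1000, mu=0.0, seed=0):
    """Largest peak-to-trough drop of one discretized Brownian path with
    drift mu on [0, t] (the H = 1/2 case of fractional Brownian motion)."""
    rng = random.Random(seed)
    dt = t / n
    b, peak, loss = 0.0, 0.0, 0.0
    for _ in range(n):
        b += mu * dt + rng.gauss(0.0, dt ** 0.5)   # Euler increment
        peak = max(peak, b)                        # running maximum
        loss = max(loss, peak - b)                 # running maximum loss
    return loss
```

Simulating fBm with H > 1/2 would instead require correlated increments (e.g. Cholesky factorization of the fBm covariance), which is beyond this sketch.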
48 CFR 436.575 - Maximum workweek-construction schedule.
2010-10-01
...-construction schedule. 436.575 Section 436.575 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE... Maximum workweek-construction schedule. The contracting officer shall insert the clause at 452.236-75, Maximum Workweek-Construction Schedule, if the clause at FAR 52.236-15 is used and the contractor's...
30 CFR 57.5039 - Maximum permissible concentration.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum permissible concentration. 57.5039... Maximum permissible concentration. Except as provided by standard § 57.5005, persons shall not be exposed to air containing concentrations of radon daughters exceeding 1.0 WL in active workings. ...
5 CFR 550.105 - Biweekly maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Biweekly maximum earnings limitation. 550.105 Section 550.105 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.105 Biweekly...
5 CFR 550.106 - Annual maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Annual maximum earnings limitation. 550.106 Section 550.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.106 Annual...
32 CFR 842.35 - Depreciation and maximum allowances.
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide”...
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time: u_x' (or Δ_t u_x) = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
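The invariance expressed by a weak maximum/minimum principle can be observed numerically on the lattice Nagumo equation: starting in [0, 1] with boundary data in [0, 1], an explicit Euler iteration stays in [0, 1] when the time step is small enough. The lattice size, step sizes, and bistability parameter below are hypothetical.

```python
def nagumo_step(u, k=1.0, dt=0.1, a=0.25):
    """One explicit Euler step of the lattice Nagumo equation
    u' = k*(u[x-1] - 2u[x] + u[x+1]) + u(1-u)(u-a), fixed (Dirichlet) ends."""
    new = u[:]
    for x in range(1, len(u) - 1):
        reaction = u[x] * (1.0 - u[x]) * (u[x] - a)
        new[x] = u[x] + dt * (k * (u[x - 1] - 2.0 * u[x] + u[x + 1]) + reaction)
    return new

# Initial data and boundary values inside [0, 1].
u = [0.0] + [0.5] * 20 + [1.0]
for _ in range(200):
    u = nagumo_step(u)
```

With a larger dt the scheme loses monotonicity and iterates can leave [0, 1], which mirrors the abstract's point that the discrete maximum principle depends on the time step.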
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth lies close to and slightly above the estimated maximum possible PPV. The comparison shows that the PPVs predicted by the proposed model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed model for estimating the PPV in a rock mass with a set of joints subjected to a two-dimensional compressional wave at the boundary of a tunnel or a borehole.
Maximum detection range limitation of pulse laser radar with Geiger-mode avalanche photodiode array
Luo, Hanjun; Xu, Benlian; Xu, Huigang; Chen, Jingbo; Fu, Yadan
2015-05-01
When designing and evaluating the performance of a laser radar system, the maximum achievable detection range is an essential parameter. The purpose of this paper is to propose a theoretical model of the maximum detection range for simulating the ranging performance of Geiger-mode laser radar. Based on the laser radar equation and the requirement of a minimum acceptable detection probability, and assuming the primary electrons triggered by the echo photons obey Poisson statistics, the maximum-range theoretical model is established. Using the system design parameters, the influence of five main factors, namely emitted pulse energy, noise, echo position, atmospheric attenuation coefficient, and target reflectivity, on the maximum detection range is investigated. The results show that stronger emitted pulse energy, a lower noise level, an earlier echo position in the range gate, a lower atmospheric attenuation coefficient, and higher target reflectivity result in a greater maximum detection range. It is also shown that it is important to select the minimum acceptable detection probability, which is equivalent to a system signal-to-noise ratio requirement, so as to obtain a greater maximum detection range together with a lower false-alarm probability.
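The structure of such a model can be sketched as follows: Poisson statistics give the detection probability from the mean primary-electron count, and a link budget is scanned for the largest range still meeting the required probability. The constants below (a single system factor folding pulse energy, optics, and quantum efficiency together) are purely hypothetical, not the paper's parameters.

```python
import math

def detection_probability(n_e):
    """Poisson primary electrons: P(at least one Geiger event) = 1 - exp(-n_e)."""
    return 1.0 - math.exp(-n_e)

def max_range(e_pulse, k_sys, alpha, rho, p_min, step=1.0):
    """Largest range R whose detection probability still meets p_min, for a
    hypothetical link budget n_e = k_sys*e_pulse*rho*exp(-2*alpha*R)/R**2."""
    r = step
    while detection_probability(
            k_sys * e_pulse * rho * math.exp(-2.0 * alpha * r) / r ** 2) >= p_min:
        r += step
    return r - step

# Made-up system factor, attenuation, reflectivity, and probability floor.
r_max = max_range(e_pulse=1.0, k_sys=1e6, alpha=1e-4, rho=0.3, p_min=0.9)
```

Raising the reflectivity or pulse energy, or lowering the attenuation or the probability floor, pushes the crossing point outward, reproducing the trends reported in the abstract.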
Camarrone, Flavio; Ivanova, Anna; Decoster, Wivine; de Jong, Felix; van Hulle, Marc M
2015-01-01
To examine whether the minimum as well as the maximum voice intensity (i.e. sound pressure level, SPL) curves of a voice range profile (VRP) are required when discovering different voice groups based on a clustering analysis. In this approach, no a priori labeling of voice types is used. VRPs of 194 (84 male and 110 female) professional singers were registered and processed. Cluster analysis was performed with the use of features related to (1) both the maximum and minimum SPL curves and (2) the maximum SPL curve only. Features related to the maximum as well as the minimum SPL curves showed three clusters in both male and female voices. These clusters, or voice groups, are based on voice types with similar VRP features. However, when using features related only to the maximum SPL curve, the clusters became less obvious. Features related to the maximum and minimum SPL curves of a VRP are both needed in order to identify the three voice clusters. © 2016 S. Karger AG, Basel.
Understanding the Role of Reservoir Size on Probable Maximum Precipitation
Woldemichael, A. T.; Hossain, F.
2011-12-01
This study addresses the question 'Does the surface area of an artificial reservoir matter in the estimation of probable maximum precipitation (PMP) for an impounded basin?' The motivation of the study was the notion that the stationarity assumption implicit in PMP for dam design can be undermined in the post-dam era by an enhancement of extreme precipitation patterns due to the artificial reservoir. In addition, the study lays the foundation for use of regional atmospheric models as one way to perform life-cycle assessment for planned or existing dams and to formulate best management practices. The American River Watershed (ARW), with the Folsom dam at the confluence of the American River, was selected as the study region, and the Dec-Jan 1996-97 storm event was selected as the study period. The numerical atmospheric model used for the study was the Regional Atmospheric Modeling System (RAMS). First, RAMS was calibrated and validated with selected station and spatially interpolated precipitation data, and the best combinations of parameterization schemes in RAMS were selected accordingly. Second, to mimic the standard method of PMP estimation by the moisture maximization technique, relative humidity terms in the model were raised to 100% from the ground up to the 500 mb level. The model-based maximum 72-hr precipitation values so obtained were named extreme precipitation (EP) to distinguish them from the PMPs obtained by the standard methods. Third, six hypothetical reservoir size scenarios, ranging from no dam (all dry) to a reservoir submerging half of the basin, were established to test the influence of reservoir size variation on EP. For the case of the ARW, our study clearly demonstrated that the stationarity assumption implicit in the traditional estimation of PMP can be rendered invalid in large part by the very presence of the artificial reservoir. Cloud tracking procedures performed on the basin also give indication of the
The maximum single dose of resistant maltodextrin that does not cause diarrhea in humans.
Kishimoto, Yuka; Kanahori, Sumiko; Sakano, Katsuhisa; Ebihara, Shukuko
2013-01-01
The objective of the present study was to determine the maximum single dose of resistant maltodextrin (Fibersol-2), a non-viscous water-soluble dietary fiber, that does not induce transitory diarrhea. Ten healthy adult subjects (5 men and 5 women) ingested Fibersol-2 at increasing dose levels of 0.7, 0.8, 0.9, 1.0, and 1.1 g/kg body weight (bw). Each administration was separated from the previous dose by an interval of 1 wk. The highest dose level that did not cause diarrhea in any subject was regarded as the maximum non-effective level for a single dose. The results showed that no subject of either sex experienced diarrhea at dose levels of 0.7, 0.8, 0.9, or 1.0 g/kg bw. At the highest dose level of 1.1 g/kg bw, no female subject experienced diarrhea, whereas 1 male subject developed diarrhea with muddy stools 2 h after ingestion of the test substance. Consequently, the maximum non-effective level for a single dose of the resistant maltodextrin Fibersol-2 is 1.0 g/kg bw for men and >1.1 g/kg bw for women. Gastrointestinal symptoms were gurgling sounds in 4 subjects (7 events) and flatus in 5 subjects (9 events), although no association with dose level was observed. These symptoms were mild and transient and resolved without treatment.
The Relationship Between Maximum Isometric Strength and Ball Velocity in the Tennis Serve
Corbi, Francisco; Fuentes, Juan Pedro; Fernández-Fernández, Jaime
2016-01-01
Abstract The aims of this study were to analyze the relationship between maximum isometric strength levels in different upper and lower limb joints and serve velocity in competitive tennis players, as well as to develop a prediction model based on this information. Twelve male competitive tennis players (mean ± SD; age: 17.2 ± 1.0 years; body height: 180.1 ± 6.2 cm; body mass: 71.9 ± 5.6 kg) were tested for maximum isometric strength levels (i.e., wrist, elbow and shoulder flexion and extension; leg and back extension; shoulder external and internal rotation). Serve velocity was measured using a radar gun. Results showed a strong positive relationship between serve velocity and shoulder internal rotation (r = 0.67; p < 0.05), and weaker, non-significant relationships between serve velocity and wrist, elbow and shoulder flexion-extension, leg and back extension and shoulder external rotation (r = 0.36-0.53; p = 0.377-0.054). Bivariate and multivariate models for predicting serve velocity were developed, with shoulder flexion and internal rotation explaining 55% of the variance in serve velocity (r = 0.74; p < 0.001). The maximum isometric strength level in shoulder internal rotation was strongly related to serve velocity, and a large part of the variability in serve velocity was explained by the maximum isometric strength levels in shoulder internal rotation and shoulder flexion. PMID:28149411
An evaluation of Panicum maximum cv. Gatton: 2. The influence of ...
Unknown
Abstract. The aim of the study was to evaluate the nutritional value of Panicum maximum cv. .... Table 2 Mean (± s.d.) chemical composition of oesophageal samples .... solubility, energy content of the diet and level of intake (Van Soest, 1982), ...
Modelling the maximum voluntary joint torque/angular velocity relationship in human movement.
Yeadon, Maurice R; King, Mark A; Wilson, Cassie
2006-01-01
The force exerted by a muscle is a function of the activation level and the maximum (tetanic) muscle force. In "maximum" voluntary knee extensions muscle activation is lower for eccentric muscle velocities than for concentric velocities. The aim of this study was to model this "differential activation" in order to calculate the maximum voluntary knee extensor torque as a function of knee angular velocity. Torque data were collected on two subjects during maximal eccentric-concentric knee extensions using an isovelocity dynamometer with crank angular velocities ranging from 50 to 450 deg/s. The theoretical tetanic torque/angular velocity relationship was modelled using a four parameter function comprising two rectangular hyperbolas while the activation/angular velocity relationship was modelled using a three parameter function that rose from submaximal activation for eccentric velocities to full activation for high concentric velocities. The product of these two functions gave a seven parameter function which was fitted to the joint torque/angular velocity data, giving unbiased root mean square differences of 1.9% and 3.3% of the maximum torques achieved. Differential activation accounts for the non-hyperbolic behaviour of the torque/angular velocity data for low concentric velocities. The maximum voluntary knee extensor torque that can be exerted may be modelled accurately as the product of functions defining the maximum torque and the maximum voluntary activation level. Failure to include differential activation considerations when modelling maximal movements will lead to errors in the estimation of joint torque in the eccentric phase and low velocity concentric phase.
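The product structure described above can be sketched with hypothetical parameter values (the authors' seven fitted parameters are not reproduced here): a tetanic torque curve that declines hyperbolically for concentric velocities and plateaus above isometric for eccentric ones, multiplied by an activation function that is submaximal eccentrically.

```python
import math

def max_voluntary_torque(omega):
    """Illustrative torque model (omega in deg/s, concentric positive):
    voluntary torque = tetanic torque x activation(omega).
    All parameter values are hypothetical, not the authors' fits."""
    t0, w_max, k = 250.0, 900.0, 4.0          # isometric torque, max velocity, shape
    if omega >= 0.0:                          # concentric: hyperbolic decline
        tet = t0 * (w_max - omega) / (w_max + k * omega)
    else:                                     # eccentric: plateau above isometric
        tet = t0 * (1.3 - 0.3 * math.exp(omega / 100.0))
    act = 0.7 + 0.3 / (1.0 + math.exp(-omega / 100.0))  # submaximal eccentrically
    return tet * act
```

The eccentric plateau times the reduced activation still exceeds the voluntary isometric torque, while torque falls to zero at the maximum concentric velocity, qualitatively matching the behaviour the abstract describes.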
Pan, Sudip; Solà, Miquel; Chattaraj, Pratim K
2013-02-28
Hardness and electrophilicity values for several molecules involved in different chemical reactions are calculated at various levels of theory and by using different basis sets. Effects of these aspects as well as different approximations to the calculation of those values vis-à-vis the validity of the maximum hardness and minimum electrophilicity principles are analyzed in the cases of some representative reactions. Among 101 studied exothermic reactions, 61.4% and 69.3% of the reactions are found to obey the maximum hardness and minimum electrophilicity principles, respectively, when hardness of products and reactants is expressed in terms of their geometric means. However, when we use arithmetic mean, the percentage reduces to some extent. When we express the hardness in terms of scaled hardness, the percentage obeying maximum hardness principle improves. We have observed that maximum hardness principle is more likely to fail in the cases of very hard species like F(-), H(2), CH(4), N(2), and OH appearing in the reactant side and in most cases of the association reactions. Most of the association reactions obey the minimum electrophilicity principle nicely. The best results (69.3%) for the maximum hardness and minimum electrophilicity principles reject the 50% null hypothesis at the 2% level of significance.
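The geometric versus arithmetic mean comparison used above reduces to a one-line check per reaction; a sketch with made-up hardness values (in eV), not data from the study:

```python
import math

def geometric_mean(vals):
    return math.exp(sum(math.log(v) for v in vals) / len(vals))

def arithmetic_mean(vals):
    return sum(vals) / len(vals)

def obeys_max_hardness(reactants, products, mean=geometric_mean):
    """Maximum hardness principle for an exothermic reaction: the products
    should be, on average, harder than the reactants."""
    return mean(products) > mean(reactants)

# Hypothetical hardness values (eV) for one reaction's species.
verdict = obeys_max_hardness([4.0, 6.0], [7.0, 8.0])
```

Swapping in `arithmetic_mean` for the `mean` argument reproduces the abstract's second tally; since the geometric mean is always below the arithmetic mean, borderline reactions can flip between the two conventions.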
2010-07-01
... as specified in 40 CFR 1065.610. This is the maximum in-use engine speed used for calculating the NOX... procedures of 40 CFR part 1065, based on the manufacturer's design and production specifications for the..., power density, and maximum in-use engine speed. 1042.140 Section 1042.140 Protection of...
2012-04-03
... supply. The FAS underlies approximately 100,000 square miles (258,000 km\\2\\) in southern Alabama..., crustaceans, fish, sea turtles, and marine mammals. The portion of Biscayne Bay adjacent to Turkey Point is... smallii tiny polygala E Insects Heraclides aristodemus schaus swallowtail E ponceanus. butterfly....
40 CFR 141.63 - Maximum contaminant levels (MCLs) for microbiological contaminants.
2010-07-01
... pose an acute risk to health. (c) A public water system must determine compliance with the MCL for... distribution system including appropriate pipe replacement and repair procedures, main flushing programs... total coliforms in a sample, rather than coliform density. (1) For a system which collects at least...
40 CFR Appendix A1 to Subpart F of... - Generic Maximum Contaminant Levels
2010-07-01
... Appendix A1 to Subpart F of Part 82 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...-pressure appliances 1). Water 10 ppm by weight 20 ppm by weight (for refrigerants used in low-pressure... is 3ppm) No visible turbidity. 1 Low-pressure appliances means an appliance that uses a refrigerant...
40 CFR 142.65 - Variances and exemptions from the maximum contaminant levels for radionuclides.
2010-07-01
.... (i) The Administrator, pursuant to section 1415(a)(1)(A) of the Act, hereby identifies the following... Administrator hereby identifies the following as the best available technology, treatment techniques, or other... waters; competing anion concentrations may affect regeneration frequency. 11. Enhanced...
40 CFR 142.63 - Variances and exemptions from the maximum contaminant level for total coliforms.
2010-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER... pathogenic contamination, a treatment lapse or deficiency, or a problem in the operation or maintenance of...
Re-evaluation of human-toxicological maximum permissible risk levels
Baars AJ; Theelen RMC; Janssen PJCM; Hesse JM; Apeldoorn ME van; Meijerink MCM; Verdam L; Zeilmaker MJ; CSR
2001-01-01
Soil Intervention Values are generic soil quality standards based on potential risks to humans and eco-systems. These values are used to determine whether or not contaminated soils meet the criteria for "serious soil contamination" as stated in the Dutch Soil Protection Act. With reference to poten
40 CFR 142.60 - Variances from the maximum contaminant level for total trihalomethanes.
2010-07-01
... chloramines, chlorine dioxide or potassium permanganate. (5) Use of powdered activated carbon for THM precursor or TTHM reduction seasonally or intermittently at dosages not to exceed 10 mg/L on an...
Biological bases of the maximum permissible exposure levels of the UK laser standard BS 4803 1983
MacKinlay, Alistair F
1983-01-01
The use of lasers has increased greatly over the past 15 years or so, to the extent that they are now used routinely in many occupational and public situations. There has been an increasing awareness of the potential hazards presented by lasers and substantial efforts have been made to formulate safety standards. In the UK the relevant Safety Standard is the British Standards Institution Standard BS 4803. This Standard was originally published in 1972 and a revision has recently been published (BS 4803: 1983). The revised standard has been developed using the American National Standards Institute Standard, ANSI Z136.1 (1973 onwards), as a model. In other countries, national standards have been similarly formulated, resulting in a large measure of international agreement through participation in the work of the International Electrotechnical Commission (IEC). The bases of laser safety standards are biophysical data on threshold injury effects, particularly on the retina, and the development of theoretical mode...
Calculations of Maximum A-Weighted Sound Levels (dBA) Resulting from Civil Aircraft Operations.
1978-06-01
Department of Transportation, Federal Aviation Administration, Office of Environmental Quality contract report. [The remainder of the scanned abstract is garbled; the recoverable fragments concern sound levels and loudness of illustrative noises in indoor and outdoor settings, and tables of outdoor and indoor speech-interference levels for increasing sound levels.]
40 CFR 141.64 - Maximum contaminant levels for disinfection byproducts.
2010-07-01
... in this paragraph (a): Disinfection byproduct Best available technology Bromate Control of ozone... source water: Disinfection byproduct Best available technology Total trihalomethanes (TTHM) and... disinfection byproducts. 141.64 Section 141.64 Protection of Environment ENVIRONMENTAL PROTECTION...
LANDFILL OPERATION FOR CARBON SEQUESTRATION AND MAXIMUM METHANE EMISSION CONTROL
Don Augenstein; Ramin Yazdani; Rick Moore; Michelle Byars; Jeff Kieffer; Professor Morton Barlaz; Rinav Mehta
2000-02-26
Controlled landfilling is an approach to managing solid waste landfills that rapidly completes methane generation while maximizing gas capture and minimizing the usual emissions of methane to the atmosphere. With controlled landfilling, methane generation is accelerated to earlier completion of its full potential by improving conditions (principally moisture, but also temperature) to optimize biological processes occurring within the landfill. Gas is contained through use of a surface membrane cover and is captured via porous layers under the cover, operated at slight vacuum. A field demonstration project has been ongoing under NETL sponsorship for the past several years near Davis, CA, and results have been extremely encouraging. Two major benefits of the technology are the reduction of landfill methane emissions to minuscule levels and the recovery of greater amounts of landfill methane energy in much shorter times, more predictably, than with conventional landfill practice. Given the large amount of US landfill methane generated and the greenhouse potency of methane, better landfill methane control can play a substantial role both in reducing US greenhouse gas emissions and in US renewable energy. The work described in this report, to demonstrate and advance this technology, has used two demonstration-scale cells of a size (8000 metric tons [tonnes]) sufficient to replicate many heat and compaction characteristics of larger "full-scale" landfills. An enhanced demonstration cell has received moisture supplementation to field capacity, the maximum moisture waste can hold while still limiting the liquid drainage rate to minimal and safely manageable levels. The enhanced landfill module was compared to a parallel control landfill module receiving no moisture additions. Gas recovery has continued for over 4 years. It is quite encouraging that the enhanced cell methane recovery has been close to 10-fold that experienced with
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
[Anonymous]
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and a maximum likelihood estimate of the identification parameters is then given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least squares methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least squares methods.
Maximum frequency of the decametric radiation from Jupiter
Barrow, C. H.; Alexander, J. K.
1980-01-01
The upper frequency limits of Jupiter's decametric radio emission are found to be essentially the same when observed from the earth or, with considerably higher sensitivity, from the Voyager spacecraft close to Jupiter. This suggests that the maximum frequency is a real cut-off corresponding to a maximum gyrofrequency of about 38-40 MHz at Jupiter. It no longer appears to be necessary to specify different cut-off frequencies for the Io and non-Io emission as the maximum frequencies are roughly the same in each case.
Maximum range of a projectile thrown from constant-speed circular motion
Poljak, Nikola
2016-01-01
The problem of determining the angle at which a point mass launched from ground level with a given speed achieves maximum range is a standard exercise in mechanics. Similar, yet conceptually and computationally more difficult, problems have been suggested to improve student proficiency in projectile motion. The problem of determining the maximum distance of a rock thrown from an arm rotating at constant speed is presented and analyzed in detail in this text. The computational results confirm several conceptually derived conclusions regarding the initial throw position and provide some details on the angles and the style of throwing (underhand or overhand) which produce the maximum throw distance.
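A numerical sketch of such a setup, with geometry and parameter values as illustrative assumptions (pivot at height H above the ground, arm radius r < H, constant tangential speed v, counterclockwise rotation, release parametrized by the launch angle):

```python
import math

G = 9.81  # m/s^2

def throw_distance(theta, v=10.0, r=1.0, H=1.5):
    """Horizontal distance for release at launch angle theta (rad).
    For counterclockwise rotation the release point on the circle is
    tied to the launch direction: x0 = r*sin(theta), y0 = H - r*cos(theta)."""
    x0, y0 = r * math.sin(theta), H - r * math.cos(theta)
    vx, vy = v * math.cos(theta), v * math.sin(theta)
    t = (vy + math.sqrt(vy * vy + 2 * G * y0)) / G   # time to reach the ground
    return x0 + vx * t

# Grid search over launch angles 0 .. pi/2 for the best release
thetas = [i * math.pi / 2000 for i in range(1001)]
best = max(thetas, key=throw_distance)
print(math.degrees(best), throw_distance(best))
```

Because the release height varies with the angle, the optimum departs from the textbook 45 degrees, which is the kind of conceptual conclusion the paper confirms.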
Blind Detection of Ultra-faint Streaks with a Maximum Likelihood Method
Dawson, William A; Kamath, Chandrika
2016-01-01
We have developed a maximum likelihood source detection method capable of detecting ultra-faint streaks with surface brightnesses approximately an order of magnitude fainter than the pixel level noise. Our maximum likelihood detection method is a model based approach that requires no a priori knowledge about the streak location, orientation, length, or surface brightness. This method enables discovery of typically undiscovered objects, and enables the utilization of low-cost sensors (i.e., higher-noise data). The method also easily facilitates multi-epoch co-addition. We will present the results from the application of this method to simulations, as well as real low earth orbit observations.
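The core intuition, that evaluating the likelihood along a full streak model recovers signals far below per-pixel noise, can be shown with a toy matched sum over rows. The streak here is axis-aligned and, at half the pixel noise, much brighter than the ultra-faint general case the authors handle; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n, amp, sigma = 256, 0.5, 1.0                  # streak amplitude = 0.5 x pixel noise
img = rng.normal(0.0, sigma, (n, n))
row = 100
img[row, :] += amp                             # inject a faint horizontal streak

# Matched "filter" for horizontal streaks: sum each row, normalize so the
# statistic is in units of sigma. Integrating over n pixels boosts the
# streak's significance by sqrt(n).
stat = img.sum(axis=1) / (sigma * np.sqrt(n))
print(int(np.argmax(stat)))                    # index of the injected streak row
```

No single pixel in the streak row stands out, yet the row statistic does: that sqrt(n) gain is what a model-based likelihood search over position and orientation generalizes.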
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling; Wang Jun
2013-01-01
In this paper, using the maximum principle for analyzing dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for high-tech perishable products supply chain...
Maximum Principle for Nonlinear Cooperative Elliptic Systems on IR N
LEADI Liamidi; MARCOS Aboubacar
2011-01-01
We investigate in this work necessary and sufficient conditions for having a maximum principle for a cooperative elliptic system on the whole space IR^N. Moreover, we prove the existence of solutions for the considered system by an approximation method.
Maximum Likelihood Factor Structure of the Family Environment Scale.
Fowler, Patrick C.
1981-01-01
Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)
Multiresolution Maximum Intensity Volume Rendering by Morphological Adjunction Pyramids
Roerdink, Jos B.T.M.
2001-01-01
We describe a multiresolution extension to maximum intensity projection (MIP) volume rendering, allowing progressive refinement and perfect reconstruction. The method makes use of morphological adjunction pyramids. The pyramidal analysis and synthesis operators are composed of morphological 3-D
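The operation being extended above can be sketched minimally: maximum intensity projection (MIP) collapses a volume along the viewing axis by keeping, for each ray, the maximum voxel value. The pyramid machinery itself is not reproduced; max-pooling below merely stands in for a coarser level.

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.random((32, 64, 64))              # (depth, height, width)
mip = volume.max(axis=0)                       # one maximum per viewing ray

# max commutes with max: pooling the volume and then projecting equals
# projecting first and then pooling the MIP. This commutativity is what
# makes MIP well-suited to multiresolution (pyramid) schemes.
coarse_from_volume = volume.reshape(32, 32, 2, 32, 2).max(axis=(2, 4)).max(axis=0)
coarse_from_mip = mip.reshape(32, 2, 32, 2).max(axis=(1, 3))
```

The two coarse images are identical, so a progressive renderer can refine a low-resolution MIP without ever recomputing the full projection.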
Changes in context and perception of maximum reaching height.
Wagman, Jeffrey B; Day, Brian M
2014-01-01
Successfully performing a given behavior requires flexibility in both perception and behavior. In particular, doing so requires perceiving whether that behavior is possible across the variety of contexts in which it might be performed. Three experiments investigated how (changes in) context (i.e., point of observation and intended reaching task) influenced perception of maximum reaching height. The results of experiment 1 showed that perceived maximum reaching height more closely reflected actual reaching ability when perceivers occupied a point of observation compatible with that required for the reaching task. The results of experiments 2 and 3 showed that practice perceiving maximum reaching height from a given point of observation improved perception of maximum reaching height from a different point of observation, regardless of whether such practice occurred at a compatible or incompatible point of observation. In general, such findings show bounded flexibility in the perception of affordances and are thus consistent with a description of perceptual systems as smart perceptual devices.
Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)
U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights. We highlight each of these contributions in turn, first in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight, and conclude with a discussion of remaining challenges.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
16 CFR 1505.8 - Maximum acceptable material temperatures.
2010-01-01
... Association, 155 East 44th Street, New York, NY 10017. Material Degrees C. Degrees F. Capacitors (1) (1) Class... capacitor has no marked temperature limit, the maximum acceptable temperature will be assumed to be 65...
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC GIS Inventory (aka Ramona) — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...
methods. A test on a soil of relatively high solid density revealed that the developed relation loses ... where, Pd max is the laboratory maximum dry ... Addis-Jinima Road Rehabilitation. ..... data sets that differ considerably in magnitude.
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC Education | GIS Inventory — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao;
2014-01-01
This paper is devoted to the study and analysis of the maximum signal-to-noise ratio (SNR) filters for noise reduction in both the time and short-time Fourier transform (STFT) domains, with a single microphone and with multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR … This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value.
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. The present paper therefore applies maximum likelihood estimation to fit a finite mixture model in order to explore relationships in nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia.
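The maximum likelihood fit of a two-component (1-D) normal mixture is typically obtained with EM iterations; a sketch on synthetic data standing in for the paper's price series (the real data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])

# Initial guesses for weights, means, standard deviations
w, mu, sig = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior responsibility of each component for each point
    pdf = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    resp = w * pdf
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: closed-form maximum likelihood updates
    n_k = resp.sum(axis=0)
    w = n_k / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / n_k
    sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

print(sorted(mu.tolist()))   # component means, close to the true -2 and 3
```

Each EM iteration increases the mixture likelihood, which is why it is the standard route to the maximum likelihood estimates the abstract relies on.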
Estimate of the maximum induced magnetic field in relativistic shocks
Ghorbanalilu, M.; Sadegzadeh, S.
2017-01-01
The proton-driven Weibel instability is a crucial process for amplifying the generated magnetic fields in gamma-ray bursts. An expression for the saturation level of the magnetic fields is estimated in a relativistic shock consisting of electron-proton plasmas. Within the shock transition layer, the plasma is modelled with waterbag and Maxwell-Jüttner distribution functions for the asymmetric counter-propagating proton beams and the isotropic background electrons, respectively. The proton-driven Weibel-type instability in the linear phase is investigated thoroughly, and the instability conditions and stabilization mechanisms are then considered in detail just after the shutdown of the electron Weibel instability. The growth rate of the instability and the saturated magnetic field strength are obtained in terms of the effective proton beam Mach number, the asymmetry parameter, and the background electron temperature. In this paper, a fully relativistic kinetic treatment is used to formulate the dispersion relation for the proton Weibel-type instability; the saturated magnetic field strength is then computed using the magnetic trapping criterion. In the present scenario, the instability comprises two stages: in the first, the electron Weibel instability evolves very rapidly, while in the second, because of the free energy stored in the slow counter-propagating proton beams, the instability is further amplified in the context of electrons with an isotropic distribution function. The results show that the growth rate and the saturated magnetic field increase with increasing effective proton beam Mach number and with decreasing asymmetry parameter. It is shown that at temperatures around 10^8 K a maximum magnetic field of up to around 56 G can be reached by this mechanism after the saturation time.
Maziero, G C; Baunwart, C; Toledo, M C
2001-05-01
The theoretical maximum daily intakes (TMDI) of the phenolic antioxidants butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and tert-butyl hydroquinone (TBHQ) in Brazil were estimated using food consumption data derived from a household economic survey and a packaged goods market survey. The estimates were based on the maximum levels of use of the food additives specified in national food standards. The calculated intakes of the three additives for the mean consumer were below the ADIs. Estimates of TMDI for BHA, BHT and TBHQ ranged from 0.09 to 0.15, 0.05 to 0.10 and 0.07 to 0.12 mg/kg of body weight, respectively. To check whether the additives are actually used at their maximum authorized levels, analytical determinations of these compounds in selected food categories were carried out using HPLC with UV detection. BHT and TBHQ concentrations in foodstuffs considered to be representative sources of these antioxidants in the diet were below the respective maximum permitted levels. BHA was not detected in any of the analysed samples. Based on the maximal approach and on the analytical data, it is unlikely that the current ADIs of BHA (0.5 mg/kg body weight), BHT (0.3 mg/kg body weight) and TBHQ (0.7 mg/kg body weight) will be exceeded in practice by the average Brazilian consumer.
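The TMDI bookkeeping described above is simple arithmetic: the sum over food categories of (daily consumption x maximum permitted use level), divided by body weight. A sketch with invented placeholder figures, not the paper's survey data:

```python
# Hypothetical consumer: body weight and food basket are illustrative only.
body_weight_kg = 60.0

# (daily consumption in kg of food, maximum permitted additive level in mg/kg food)
foods = [
    (0.05, 200.0),   # e.g. fats and oils category (assumed figures)
    (0.02, 100.0),   # e.g. snack foods (assumed figures)
    (0.10, 25.0),    # e.g. cereals (assumed figures)
]

# TMDI in mg of additive per kg body weight per day
tmdi = sum(amount * level for amount, level in foods) / body_weight_kg
print(round(tmdi, 3))   # mg/kg bw/day for this hypothetical basket
```

Comparing such a figure against the ADI is exactly the screening step the abstract describes; analytical measurements then check whether the maximum-use assumption is realistic.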
On the maximum sufficient range of interstellar vessels
Cartin, Daniel
2011-01-01
This paper considers the likely maximum range of space vessels providing the basis of a mature interstellar transportation network. Using the principle of sufficiency, it is argued that this range will be less than three parsecs for the average interstellar vessel. This maximum range provides access from the Solar System to a large majority of nearby stellar systems, with total travel distances within the network not excessively greater than actual physical distance.
Efficiency at Maximum Power of Interacting Molecular Machines
Golubeva, Natalia; Imparato, Alberto
2012-01-01
We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor system with respect to the single-motor case. Remarkably, we find that this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range.
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available today.
A tropospheric ozone maximum over the equatorial Southern Indian Ocean
L. Zhang
2012-05-01
We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Monitoring Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamic factors. The O3 maximum is dominated by O3 production driven by lightning nitrogen oxides (NOx) emissions, which accounts for 62% of the tropospheric column O3 in May 2006. We find that the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O3 maximum are rather small. O3 production in the lightning outflow from Central Africa and South America peaks in May in both regions and is directly responsible for the O3 maximum over the western ESIO, while the lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008: the lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones and then transported northward to the ESIO.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
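For the single complex tone case mentioned above, the ML frequency estimate coincides with the periodogram maximizer, and a densely zero-padded FFT grid closely approximates the global maximum of that multimodal cost; a sketch with assumed signal parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, f_true = 128, 0.21                 # normalized frequency in cycles/sample (assumed)
t = np.arange(n)
# Unit-amplitude complex tone in additive complex Gaussian noise
x = np.exp(2j * np.pi * f_true * t) \
    + 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))

nfft = 1 << 14                        # zero-padded FFT = dense frequency grid
spec = np.abs(np.fft.fft(x, nfft)) ** 2
f_hat = int(np.argmax(spec)) / nfft   # grid point with maximum periodogram value
print(round(f_hat, 3))                # close to 0.21
```

The grid search avoids getting trapped in the periodogram's sidelobes, which is the multimodality issue the abstract's convex-relaxation approach also targets.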
Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization
Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue
2010-01-01
This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...
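The geometric reason a hybrid TOA/AOA scheme needs so few anchors can be sketched directly: a range r (from TOA) plus a bearing phi (from AOA) at one base station already locates the mobile, and two such fixes can be combined. Averaging the fixes below is a crude stand-in for the AML estimator; coordinates are invented and noiseless.

```python
import math

def fix(bs, r, phi):
    """Position implied by a range r and bearing phi measured at base station bs."""
    return (bs[0] + r * math.cos(phi), bs[1] + r * math.sin(phi))

bs1, bs2 = (0.0, 0.0), (1000.0, 0.0)   # two base stations (toy coordinates, m)
mobile = (400.0, 300.0)

# Noiseless TOA ranges and AOA bearings for illustration
r1, phi1 = math.hypot(400, 300), math.atan2(300, 400)
r2, phi2 = math.hypot(-600, 300), math.atan2(300, -600)

p1, p2 = fix(bs1, r1, phi1), fix(bs2, r2, phi2)
est = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
print(est)   # recovers (400.0, 300.0) in this noiseless toy case
```

With measurement noise the two fixes disagree, and a weighted combination (as in the AML formulation) replaces the plain average.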
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed on the basis of the maximum entropy principle by WANG Xiao-Long et al. They proved that the maximizing solution of the model is exactly the frequency distribution at which a population reaches Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg equilibrium when the genotype entropy of the population reaches its maximal possible value, and that the maximum entropy frequency distribution is equivalent to the distribution given by the Hardy-Weinberg equilibrium law at one locus. They further assumed that the maximum entropy frequency distribution is equivalent to all genetic equilibrium distributions. This, however, is incorrect: the maximum entropy distribution is only equivalent to the Hardy-Weinberg equilibrium distribution with respect to one locus or several limited loci. The case of limited loci is proved in this paper. Finally, we also discuss an example in which the maximum entropy principle is not equivalent to other genetic equilibria.
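The one-locus equivalence discussed above can be checked numerically: with the two ordered heterozygote configurations counted separately, the genotype entropy at fixed allele frequency p peaks exactly at the Hardy-Weinberg proportions p^2, 2pq, q^2. A small sketch (the value of p is arbitrary):

```python
import math

p = 0.3
q = 1.0 - p

def entropy(h):
    """Genotype entropy as a function of the heterozygote frequency h,
    with the allele frequency held at p (so P_AA = p - h/2, P_aa = q - h/2)
    and the two ordered heterozygote cells each carrying h/2."""
    paa, pbb = p - h / 2, q - h / 2
    cells = [paa, h / 2, h / 2, pbb]
    return -sum(c * math.log(c) for c in cells if c > 0)

# Grid search over all admissible heterozygote frequencies
hs = [i * 2 * min(p, q) / 10000 for i in range(1, 10000)]
h_best = max(hs, key=entropy)
print(round(h_best, 3), round(2 * p * q, 3))   # both equal 0.42 here
```

At the maximum, P_AA = p^2 and P_aa = q^2 as well, so the entropy peak and the Hardy-Weinberg distribution coincide for this one-locus case.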
Maximum probability domains for the analysis of the microscopic structure of liquids
Agostini, Federica; Savin, Andreas; Vuilleumier, Rodolphe
2014-01-01
We introduce the concept of maximum probability domains, developed in the context of the analysis of electronic densities, into the study of the microscopic spatial structure of liquids. The idea of locating a particle in a three-dimensional region, by determining the domain where the probability of finding that, and only that, particle is maximum, gives an interesting characterisation of the local structure of the liquid. The optimisation procedure required for the search of the domain of maximum probability is carried out by an implementation of the level set method. Results for a few case studies are presented, in particular for liquid water at different densities and for the solvation shells of Na+ in liquid water.
Estimation of Maximum Allowable PV Connection to LV Residential Power Networks
Demirok, Erhan; Sera, Dezso; Teodorescu, Remus
2011-01-01
Maximum photovoltaic (PV) hosting capacity of low voltage (LV) power networks is mainly restricted by either the thermal limits of network components or the grid voltage quality resulting from high penetration of distributed PV systems. This maximum hosting capacity may be lower than the available solar … transformer or using solar inverters with new grid support features. This study presents a methodology for the estimation of maximum PV hosting capacity including an IEC 60076-7 based thermal model of the distribution transformer. Part of a real distribution network in the Braedstrup suburban area in Denmark is used in simulation as a case study model. Furthermore, various solutions (utilizing thermally upgraded insulation paper in transformers, reactive power services from solar inverters, etc.) are implemented on the network under investigation to examine the PV penetration level, and finally key results learnt …
Applying the maximum information principle to cell transmission model of traffic flow
刘喜敏; 卢守峰
2013-01-01
This paper integrates the maximum information principle with the Cell Transmission Model (CTM) to formulate the velocity distribution evolution of vehicle traffic flow. The proposed discrete traffic kinetic model uses the cell transmission model to calculate the macroscopic variables of the vehicle transmission, and the maximum information principle to examine the velocity distribution in each cell. The velocity distribution based on the maximum information principle is solved by the Lagrange multiplier method. The advantage of the proposed model is that it can simultaneously calculate the hydrodynamic variables and the velocity distribution at the cell level. An example shows how the proposed model works. The proposed model is a hybrid traffic simulation model, which can be used to understand self-organization phenomena in traffic flows and to predict traffic evolution.
Evaluation of Maximum O2 Consumption: Using Ergo-Spirometry in Severe Heart Failure
Majid Malekmohammad
2012-09-01
Although sport physiologists have long analyzed respiratory gases during exercise, the approach is relatively new in the cardiovascular field and is clearly more informative than the standard exercise test, which indicates only the presence or absence of cardiovascular disease (CVD). With this form of exercise testing, both aerobic and anaerobic parameters are checked and monitored. Twenty-two patients with severe heart failure, candidates for heart transplantation referred to Massih Daneshvari Hospital in Tehran from Nov. 2007 to Nov. 2008, enrolled in this cross-sectional study, which evaluated only patients with an ejection fraction of less than 30%. Mean O2 consumption was 6.27±4.9 ml/kg/min at rest and 9.48±3.38 at the anaerobic threshold (AT), exceeding 13 ml/kg/min at maximum, which was significantly more than the expected levels. The respiratory exchange ratio (RER) was over 1 for all patients. The study found no statistical correlation between VO2 max and the participants' ergonomic factors such as age, height, weight, and BMI, as well as EF, and no significant correlation between VO2 max and maximum heart rate (HR max), although maximum O2 consumption was, as expected, correlated with expiratory ventilation. This means that the patients achieved maximum ventilation during exercise in this study, but failed to reach their maximum heart rate, probably because of HF-induced brady-arrhythmia or deconditioning of skeletal muscles.
Assurance of the Maximum Destruction in Battlefield using Cost-Effective Approximation Techniques
Fariha Tasmin Jaigirdar
2012-12-01
Full Text Available Military applications of wireless sensor networks, in the domains of maximizing security and gaining maximum benefit while attacking an opponent, are a challenging and prominent area of research nowadays. A commander's goal in a battlefield is not limited to securing his troops and the country, but extends to delivering proper commands to assault the enemy using the minimum number of resources. In this paper, we propose two efficient, low-cost approximation algorithms: the maximum clique analysis and the maximum degree analysis techniques. Both techniques find strategies for maximizing the destruction in a battlefield to defeat the opponent with limited resources. Experimental results show the effectiveness of the proposed algorithms in the prescribed areas of application. The cost-effectiveness of the algorithms is also a major concern of this research. A comparative study of the number of resources required to achieve a required level of destruction is provided in this paper. The study shows that the maximum degree analysis technique is able to perform more destruction than the maximum clique analysis technique using the same number of resources, and requires relatively less computational complexity as well.
Kuracina Richard
2015-06-01
Full Text Available The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of a wood dust cloud. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds. Part 1: Determination of the maximum explosion pressure pmax of dust clouds, and the maximum rate of explosion pressure rise according to STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds. Part 2: Determination of the maximum rate of explosion pressure rise (dp/dt)max of dust clouds. The wood dust cloud in the chamber is generated mechanically. The testing of explosions of wood dust clouds showed that the maximum pressure was reached at a concentration of 450 g/m3, where its value is 7.95 bar. The fastest pressure rise was also observed at a concentration of 450 g/m3, where its value was 68 bar/s.
Individual Module Maximum Power Point Tracking for Thermoelectric Generator Systems
Vadstrup, Casper; Schaltz, Erik; Chen, Min
2013-07-01
In a thermoelectric generator (TEG) system the DC/DC converter is under the control of a maximum power point tracker which ensures that the TEG system outputs the maximum possible power to the load. However, if the conditions, e.g., temperature, health, etc., of the TEG modules are different, each TEG module will not produce its maximum power. If each TEG module is controlled individually, each TEG module can be operated at its maximum power point and the TEG system output power will therefore be higher. In this work a power converter based on noninverting buck-boost converters capable of handling four TEG modules is presented. It is shown that, when each module in the TEG system is operated under individual maximum power point tracking, the system output power for this specific application can be increased by up to 8.4% relative to the situation when the modules are connected in series and 16.7% relative to the situation when the modules are connected in parallel.
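The series-versus-individual comparison reported in this abstract has a simple circuit-level intuition. A toy sketch, assuming idealized Thevenin module models (open-circuit voltage Voc and internal resistance R; the numbers are invented, not from the paper):

```python
def p_individual(modules):
    """Each module at its own maximum power point: for a Thevenin source
    (Voc, R) the MPP load equals R, giving P = Voc^2 / (4R)."""
    return sum(v * v / (4.0 * r) for v, r in modules)

def p_series(modules):
    """One converter for the whole series string: a common current I flows,
    so P(I) = sum(Voc)*I - sum(R)*I^2, maximized at I = sum(Voc)/(2*sum(R)),
    giving P = sum(Voc)^2 / (4*sum(R))."""
    vs = sum(v for v, _ in modules)
    rs = sum(r for _, r in modules)
    return vs * vs / (4.0 * rs)

# mismatched hot/cold modules (Voc in volts, R in ohms; invented numbers)
mods = [(4.0, 1.0), (2.0, 1.5)]
gain = p_individual(mods) / p_series(mods) - 1.0   # individual-MPPT gain
```

By the Cauchy-Schwarz inequality, p_individual is never below p_series, with equality only for identical modules; the 8.4% and 16.7% figures in the abstract come from the authors' measured system, not from this toy model.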
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.
Predicting species' maximum dispersal distances from simple plant traits.
Tamme, Riin; Götzenberger, Lars; Zobel, Martin; Bullock, James M; Hooftman, Danny A P; Kaasik, Ants; Pärtel, Meelis
2014-02-01
Many studies have shown plant species' dispersal distances to be strongly related to life-history traits, but how well different traits can predict dispersal distances is not yet known. We used cross-validation techniques and a global data set (576 plant species) to measure the predictive power of simple plant traits to estimate species' maximum dispersal distances. Including dispersal syndrome (wind, animal, ant, ballistic, and no special syndrome), growth form (tree, shrub, herb), seed mass, seed release height, and terminal velocity in different combinations as explanatory variables, we constructed models to explain variation in measured maximum dispersal distances and evaluated their power to predict maximum dispersal distances. Predictions are more accurate, but also limited to a particular set of species, if data on more specific traits, such as terminal velocity, are available. The best model (R2 = 0.60) included dispersal syndrome, growth form, and terminal velocity as fixed effects. Reasonable predictions of maximum dispersal distance (R2 = 0.53) are also possible when using only the simplest and most commonly measured traits: dispersal syndrome and growth form, together with species taxonomy data. We provide a function (dispeRsal) to be run in the software package R. This enables researchers to estimate maximum dispersal distances with confidence intervals for plant species using measured traits as predictors. Easily obtainable trait data, such as dispersal syndrome (inferred from seed morphology) and growth form, enable predictions to be made for a large number of species.
Prediction of three dimensional maximum isometric neck strength.
Fice, Jason B; Siegmund, Gunter P; Blouin, Jean-Sébastien
2014-09-01
We measured maximum isometric neck strength under combinations of flexion/extension, lateral bending and axial rotation to determine whether neck strength in three dimensions (3D) can be predicted from principal axes strength. This would allow biomechanical modelers to validate their neck models across many directions using only principal axis strength data. Maximum isometric neck moments were measured in 9 male volunteers (29±9 years) for 17 directions. The 3D moments were normalized by the principal axis moments, and compared to unity for all directions tested. Finally, each subject's maximum principal axis moments were used to predict their resultant moment in the off-axis directions. Maximum moments were 30±6 N m in flexion, 32±9 N m in lateral bending, 51±11 N m in extension, and 13±5 N m in axial rotation. The normalized 3D moments were not significantly different from unity (95% confidence interval contained one), except for three directions that combined ipsilateral axial rotation and lateral bending; in these directions the normalized moments exceeded one. Predicted resultant moments compared well to the actual measured values (r2=0.88). Despite exceeding unity in these directions, the normalized moments were consistent enough across subjects to allow prediction of maximum 3D neck strength using principal axes neck strength.
Salamat, Mona; Zare, Mehdi; Holschneider, Matthias; Zöller, Gert
2017-03-01
The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Due to sparse data, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this aim, we use the original catalog, to which no declustering methods were applied, as well as a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required. In this case, no information is gained from the data. Therefore, we elaborate for which settings finite confidence intervals are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although calculations of the confidence interval in the Central Iran and Zagros seismotectonic zones are relatively acceptable for meaningful levels of confidence, the results in Kopet Dagh, Alborz, Azerbaijan and Makran are less promising. The results indicate that estimating m_max from an earthquake catalog alone for reasonable levels of confidence is almost impossible.
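The unbounded-interval phenomenon this abstract describes can be reproduced with a small calculation. The sketch below follows the standard construction (the upper limit is the m_max at which the probability of the observed catalog maximum drops to alpha), but the catalog parameters are invented and this is not the authors' code.

```python
def gr_cdf(m, b, m_min, m_max):
    """CDF of the doubly truncated Gutenberg-Richter law on [m_min, m_max]."""
    num = 1.0 - 10.0 ** (-b * (m - m_min))
    den = 1.0 - 10.0 ** (-b * (m_max - m_min))
    return num / den

def mmax_upper_bound(mu, n, b, m_min, alpha):
    """Upper limit of the confidence interval for m_max at level 1 - alpha,
    given the largest of n observed magnitudes, mu.  The limit solves
    P(catalog maximum <= mu | m_max) = alpha.  Returns None when even
    m_max -> infinity leaves that probability above alpha, i.e. the
    confidence interval is unbounded (the case the abstract highlights)."""
    # limiting value of P(max <= mu) as m_max -> infinity
    if (1.0 - 10.0 ** (-b * (mu - m_min))) ** n > alpha:
        return None
    def tail(m_max):
        return gr_cdf(mu, b, m_min, m_max) ** n
    lo, hi = mu, mu + 1.0
    while tail(hi) > alpha:       # bracket the crossing
        hi += 1.0
    for _ in range(100):          # bisect: tail is decreasing in m_max
        mid = 0.5 * (lo + hi)
        if tail(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# invented example: b = 1.0, m_min = 4.0, largest observed magnitude 6.0
ub = mmax_upper_bound(6.0, 500, 1.0, 4.0, 0.05)        # finite, a bit above 6
unbounded = mmax_upper_bound(6.0, 50, 1.0, 4.0, 0.05)  # None: too few events
```

The same formula shows why high confidence levels fail: as alpha shrinks, the check at the top returns None for progressively larger catalogs.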
Predicting Maximum Sunspot Number in Solar Cycle 24
Nipa J Bhatt; Rajmal Jain; Malini Aggarwal
2009-03-01
A few prediction methods have been developed based on the precursor technique, which has been found successful for forecasting solar activity. Considering the geomagnetic activity aa indices during the descending phase of the preceding solar cycle as the precursor, we predict the maximum amplitude of the annual mean sunspot number in cycle 24 to be 111 ± 21. This suggests that the maximum amplitude of the upcoming cycle 24 will be less than that of cycles 21–22. Further, we have estimated the annual mean geomagnetic activity aa index for the solar maximum year in cycle 24 to be 20.6 ± 4.7, and the average of the annual mean sunspot number during the descending phase of cycle 24 is estimated to be 48 ± 16.8.
Construction and enumeration of Boolean functions with maximum algebraic immunity
ZHANG WenYing; WU ChuanKun; LIU XiangZhong
2009-01-01
Algebraic immunity is a new cryptographic criterion proposed against algebraic attacks. In order to resist algebraic attacks, Boolean functions used in many stream ciphers should possess high algebraic immunity. This paper presents two main results on finding balanced Boolean functions with maximum algebraic immunity. By swapping the values of two bits, and then generalizing the result to swap some pairs of bits, of the symmetric Boolean function constructed by Dalai, a new class of Boolean functions with maximum algebraic immunity is constructed. An enumeration of such functions is also given. For a given function p(x) with deg(p(x)) < [n/2], we give a method to construct functions of the form p(x)+q(x) that achieve maximum algebraic immunity, where every term with nonzero coefficient in the ANF of q(x) has degree no less than [n/2].
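For small n, algebraic immunity can be computed directly by linear algebra over GF(2), which makes the notion concrete: AI(f) is the least degree of a nonzero annihilator of f or of 1+f, and it never exceeds ⌈n/2⌉. A brute-force sketch (not from the paper; feasible only for toy sizes):

```python
from itertools import combinations, product

def gf2_rank(rows):
    """Rank over GF(2) of a 0/1 matrix given as a list of row lists."""
    pivots = []
    for r in rows:
        v = int("".join(map(str, r)), 2)
        for p in pivots:
            v = min(v, v ^ p)        # clear the pivot's leading bit if set
        if v:
            pivots.append(v)
    return len(pivots)

def algebraic_immunity(f, n):
    """AI(f): minimum degree of a nonzero annihilator of f or of 1+f.
    f maps each n-bit tuple to 0/1."""
    def has_annihilator(support, d):
        monos = [s for k in range(d + 1) for s in combinations(range(n), k)]
        rows = [[1 if all(x[i] for i in s) else 0 for s in monos]
                for x in support]
        # a nonzero g of degree <= d vanishing on the support exists
        # iff the constraint matrix has rank < number of monomials
        return gf2_rank(rows) < len(monos)
    pts = list(product([0, 1], repeat=n))
    on = [x for x in pts if f[x] == 1]    # zeros required of annihilators of f
    off = [x for x in pts if f[x] == 0]   # ... of 1 + f
    for d in range(n + 1):
        if has_annihilator(on, d) or has_annihilator(off, d):
            return d
    return n

maj3 = {x: int(sum(x) >= 2) for x in product([0, 1], repeat=3)}
ai = algebraic_immunity(maj3, 3)   # majority attains the bound ceil(3/2) = 2
```

The symmetric majority function used here is in the same family as Dalai's construction mentioned in the abstract; the constructions in the paper work at cryptographic sizes where this brute force is infeasible.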
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18 data sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
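Burg's method itself is compact enough to sketch. This pure-Python implementation and the synthetic AR(1) demo signal are illustrative only; the paper applies the method to interferogram data, not to this series.

```python
import random

def burg(x, order):
    """Burg maximum-entropy AR fit: returns coefficients a[1..p] of the
    prediction model x[n] + a1*x[n-1] + ... + ap*x[n-p] = e[n], chosen at
    each stage to minimize forward plus backward prediction error."""
    f = list(x[1:])    # forward prediction errors
    b = list(x[:-1])   # backward prediction errors
    a = []
    for _ in range(order):
        num = -2.0 * sum(fi * bi for fi, bi in zip(f, b))
        den = sum(fi * fi for fi in f) + sum(bi * bi for bi in b)
        k = num / den                              # reflection coefficient
        # Levinson recursion for the AR coefficients
        a = [ai + k * aj for ai, aj in zip(a, reversed(a))] + [k]
        f_new = [fi + k * bi for fi, bi in zip(f, b)]
        b_new = [bi + k * fi for fi, bi in zip(f, b)]
        f, b = f_new[1:], b_new[:-1]
    return a

# demo on a synthetic AR(1) process x[n] = 0.9 x[n-1] + noise
random.seed(1)
x, prev = [], 0.0
for _ in range(4000):
    prev = 0.9 * prev + random.gauss(0.0, 1.0)
    x.append(prev)
a = burg(x, 1)   # expect a[0] near -0.9
```

The MEM spectrum reported in the abstract is then the AR model's power spectral density, which is what gives the method its resolution advantage on short records.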
Mass mortality of the vermetid gastropod Ceraesignum maximum
Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.
2016-09-01
Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m-2. In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.
The optimal polarizations for achieving maximum contrast in radar images
Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.
1988-01-01
There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., the filter that produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when data is processed with the optimal polarimetric matched filter.
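The eigenvalue formulation can be illustrated with real 2x2 stand-ins for the class covariance matrices (the actual polarimetric filter is complex-valued and higher-dimensional; the matrices below are invented). The contrast ratio is a generalized Rayleigh quotient, maximized by the dominant generalized eigenvector:

```python
import math

def quad(w, M):
    """Quadratic form w^T M w for a 2x2 matrix."""
    return sum(w[i] * M[i][j] * w[j] for i in range(2) for j in range(2))

def contrast(w, A, B):
    """Contrast ratio between two scattering classes with covariance
    stand-ins A and B, for filter weights w."""
    return quad(w, A) / quad(w, B)

def optimal_filter(A, B, iters=100):
    """Dominant generalized eigenvector of A w = lambda B w, found by
    power iteration on B^-1 A; this w maximizes the contrast ratio."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    w = [1.0, 0.3]                         # arbitrary nonzero start
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(2)) for i in range(2)]
        w = [(B[1][1] * v[0] - B[0][1] * v[1]) / det,   # solve B w = v
             (B[0][0] * v[1] - B[1][0] * v[0]) / det]
        nrm = math.hypot(w[0], w[1])
        w = [w[0] / nrm, w[1] / nrm]
    return w

A = [[3.0, 1.0], [1.0, 1.0]]   # class-1 covariance (assumed numbers)
B = [[2.0, 0.0], [0.0, 1.0]]   # class-2 covariance (assumed numbers)
w_opt = optimal_filter(A, B)
```

For these matrices the maximum contrast ratio is the dominant eigenvalue of B^-1 A (here 2.0), mirroring the paper's statement that the optimal filter comes out of an eigenvalue problem.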
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
Influence of maximum decking charge on intensity of blasting vibration
(author not listed)
2006-01-01
Based on the character of short-time non-stationary random signals, the relationship between the maximum decking charge and the energy distribution of blasting vibration signals was investigated by means of the wavelet packet method. Firstly, the characteristics of the wavelet transform and wavelet packet analysis were described. Secondly, the blasting vibration signals were analyzed by wavelet packet using MATLAB, and the changes of the energy distribution curve in different frequency bands were obtained. Finally, the law by which the energy distribution of blasting vibration signals changes with the maximum decking charge was analyzed. The results show that with the increase of decking charge, the ratio of high-frequency energy to total energy decreases, and the dominant frequency bands of blasting vibration signals tend towards low frequency; blasting vibration does not depend on the maximum decking charge alone.
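The per-band energy statistics described above can be illustrated with a single-level Haar transform. The paper uses a full wavelet packet decomposition in MATLAB; this one-level pure-Python sketch only shows the kind of low/high-band energy ratio being tracked against charge.

```python
def haar_step(x):
    """One level of the orthonormal Haar transform (len(x) must be even):
    returns low-frequency (approximation) and high-frequency (detail)
    coefficients."""
    s = 0.5 ** 0.5
    lo = [s * (x[2 * i] + x[2 * i + 1]) for i in range(len(x) // 2)]
    hi = [s * (x[2 * i] - x[2 * i + 1]) for i in range(len(x) // 2)]
    return lo, hi

def high_band_energy_ratio(x):
    """Fraction of total signal energy falling in the high-frequency band,
    the per-band statistic whose trend the paper analyzes."""
    lo, hi = haar_step(x)
    e_lo = sum(c * c for c in lo)
    e_hi = sum(c * c for c in hi)
    return e_hi / (e_lo + e_hi)

smooth = [float(i) for i in range(16)]   # slowly varying: mostly low band
spiky = [1.0, -1.0] * 8                  # alternating: entirely high band
```

Because the transform is orthonormal, band energies sum exactly to the signal energy, which is what makes the energy-distribution curves in the paper well defined.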
The subsequence weight distribution of summed maximum length digital sequences
Weathers, G. D.; Graf, E. R.; Wallace, G. R.
1974-01-01
An attempt is made to develop mathematical formulas to provide the basis for the design of pseudorandom signals intended for applications requiring accurate knowledge of the statistics of the signals. The analysis approach involves calculating the first five central moments of the weight distribution of subsequences of hybrid-sum sequences. The hybrid-sum sequence is formed from the modulo-two sum of k maximum length sequences and is an extension of the sum sequences formed from two maximum length sequences that Gilson (1966) evaluated. The weight distribution of the subsequences serves as an approximation to the filtering process. The basic reason for the analysis of hybrid-sum sequences is to establish a large group of sequences with good statistical properties. It is shown that this can be accomplished much more efficiently using the hybrid-sum approach rather than forming the group strictly from maximum length sequences.
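Maximum length sequences and their hybrid (modulo-two) sums are easy to generate, which makes the subsequence-weight statistics concrete. A sketch using two degree-4 primitive polynomials (x^4 + x + 1 and x^4 + x^3 + 1; the choice and subsequence length are illustrative, not taken from the paper):

```python
def msequence(taps, degree, length):
    """Bits of the LFSR recurrence a[n] = XOR of a[n-t] for t in taps,
    seeded with an impulse.  For a primitive feedback polynomial this is
    a maximum length sequence with period 2**degree - 1."""
    a = [1] + [0] * (degree - 1)
    while len(a) < length:
        bit = 0
        for t in taps:
            bit ^= a[-t]
        a.append(bit)
    return a[:length]

def subsequence_weights(seq, L):
    """Weights (ones counts) of every length-L subsequence."""
    return [sum(seq[i:i + L]) for i in range(len(seq) - L + 1)]

# two period-15 m-sequences and their modulo-two (hybrid) sum
s1 = msequence((3, 4), 4, 15)   # x^4 + x + 1   ->  a[n] = a[n-3] ^ a[n-4]
s2 = msequence((1, 4), 4, 15)   # x^4 + x^3 + 1 ->  a[n] = a[n-1] ^ a[n-4]
hybrid = [u ^ v for u, v in zip(s1, s2)]

ws = subsequence_weights(hybrid, 5)
mean_w = sum(ws) / len(ws)                              # first moment
var_w = sum((w - mean_w) ** 2 for w in ws) / len(ws)    # second central moment
```

The paper carries this analysis to the first five central moments and to sums of k component sequences; the generator above is only the smallest case that exhibits the construction.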
Maximum power point tracking for optimizing energy harvesting process
Akbari, S.; Thang, P. C.; Veselov, D. S.
2016-10-01
There has been growing interest in using energy harvesting techniques for powering wireless sensor networks. The reason for utilizing this technology can be explained by the sensors' limited operation time, which results from the finite capacity of batteries, and the need for a stable power supply in some applications. Energy can be harvested from the sun, wind, vibration, heat, etc. It is reasonable to develop multisource energy harvesting platforms to increase the amount of harvested energy and to mitigate the issue concerning the intermittent nature of ambient sources. In the context of solar energy harvesting, it is possible to develop algorithms for finding the optimal operating point of solar panels at which maximum power is generated. These algorithms are known as maximum power point tracking techniques. In this article, we review the concept of maximum power point tracking and provide an overview of the research conducted in this area for wireless sensor network applications.
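The hill-climbing idea behind many of the maximum power point tracking schemes such a review covers can be sketched with the classic perturb-and-observe loop on a toy power curve (the PV model below is an invented illustration, not from the article):

```python
def pv_power(v):
    """Toy single-peak photovoltaic power curve (invented, illustrative)."""
    i = 5.0 * (1.0 - (v / 20.0) ** 7)   # crude I-V characteristic
    return max(v * i, 0.0)

def perturb_and_observe(v0=5.0, step=0.1, iters=500):
    """Classic hill-climbing MPPT: keep perturbing the operating voltage in
    the direction that increased the measured power, reverse otherwise."""
    v, dv = v0, step
    p = pv_power(v)
    for _ in range(iters):
        v_next = v + dv
        p_next = pv_power(v_next)
        if p_next < p:
            dv = -dv          # power fell: reverse the perturbation
        v, p = v_next, p_next
    return v, p

v_mpp, p_mpp = perturb_and_observe()   # settles into oscillation at the peak
```

The characteristic drawback visible even in this sketch, steady-state oscillation of one perturbation step around the maximum, is one of the trade-offs such surveys compare against incremental-conductance and other trackers.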
Proscriptive Bayesian Programming and Maximum Entropy: a Preliminary Study
Koike, Carla Cavalcante
2008-11-01
Some problems found in robotics systems, as avoiding obstacles, can be better described using proscriptive commands, where only prohibited actions are indicated in contrast to prescriptive situations, which demands that a specific command be specified. An interesting question arises regarding the possibility to learn automatically if proscriptive commands are suitable and which parametric function could be better applied. Lately, a great variety of problems in robotics domain are object of researches using probabilistic methods, including the use of Maximum Entropy in automatic learning for robot control systems. This works presents a preliminary study on automatic learning of proscriptive robot control using maximum entropy and using Bayesian Programming. It is verified whether Maximum entropy and related methods can favour proscriptive commands in an obstacle avoidance task executed by a mobile robot.
Multitime maximum principle approach of minimal submanifolds and harmonic maps
Udriste, Constantin
2011-01-01
Some optimization problems coming from differential geometry, for example the minimal submanifold problem and the harmonic map problem, are solved here via interior solutions of appropriate multitime optimal control problems. Section 1 underlines some science domains where multitime optimal control problems appear. Section 2 (Section 3) recalls the multitime maximum principle for optimal control problems with multiple (curvilinear) integral cost functionals and $m$-flow type constraint evolution. Section 4 shows that there exists a multitime maximum principle approach of multitime variational calculus. Section 5 (Section 6) proves that the minimal submanifolds (harmonic maps) are optimal solutions of multitime evolution PDEs in an appropriate multitime optimal control problem. Section 7 uses the multitime maximum principle to show that, of all solids having a given surface area, the sphere is the one having the greatest volume. Section 8 studies the minimal area of a multitime linear flow as optimal c...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
Full Text Available A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Approximate maximum-entropy moment closures for gas dynamics
McDonald, James G.
2016-11-01
Accurate prediction of flows that exist between the traditional continuum regime and the free-molecular regime has proven difficult to obtain. Current methods are either inaccurate in this regime or prohibitively expensive for practical problems. Moment closures have long held the promise of providing new, affordable, accurate methods in this regime. The maximum-entropy hierarchy of closures seems to offer particularly attractive physical and mathematical properties. Unfortunately, several difficulties render the practical implementation of maximum-entropy closures very difficult. This work examines the use of simple approximations to these maximum-entropy closures and shows that physical accuracy that is vastly improved over continuum methods can be obtained without a significant increase in computational cost. Initially the technique is demonstrated for a simple one-dimensional gas. It is then extended to the full three-dimensional setting. The resulting moment equations are used for the numerical solution of shock-wave profiles with promising results.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Kenneth W. K. Lui
2009-01-01
Full Text Available We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
Remarks on the strong maximum principle for nonlocal operators
Jerome Coville
2008-05-01
Full Text Available In this note, we study the existence of a strong maximum principle for the nonlocal operator $$\mathcal{M}[u](x) := \int_{G} J(g)\,u(x*g^{-1})\,d\mu(g) - u(x),$$ where $G$ is a topological group acting continuously on a Hausdorff space $X$ and $u \in C(X)$. First we investigate the general situation and derive a pre-maximum principle. Then we restrict our analysis to the case of homogeneous spaces (i.e., $X = G/H$). For such Hausdorff spaces, depending on the topology, we give a condition on $J$ such that a strong maximum principle holds for $\mathcal{M}$. We also revisit the classical case of the convolution operator (i.e., $G = (\mathbb{R}^n, +)$, $X = \mathbb{R}^n$, $d\mu = dy$).
Resource-constrained maximum network throughput on space networks
Yanling Xing; Ning Ge; Youzheng Wang
2015-01-01
This paper investigates the maximum network throughput for resource-constrained space networks based on the delay- and disruption-tolerant networking (DTN) architecture. Specifically, this paper proposes a methodology for calculating the maximum network throughput of multiple transmission tasks under storage and delay constraints over a space network. A mixed-integer linear programming (MILP) problem is formulated to solve this problem. Simulation results show that the proposed methodology can successfully calculate the optimal throughput of a space network under storage and delay constraints, as well as a clear, monotonic relationship between end-to-end delay and the maximum network throughput under storage constraints. At the same time, the optimization results shed light on routing and transport protocol design in space communication, which can be used to obtain the optimal network throughput.