Handelman's hierarchy for the maximum stable set problem
Laurent, M.; Sun, Z.
2014-01-01
The maximum stable set problem is a well-known NP-hard problem in combinatorial optimization, which can be formulated as the maximization of a quadratic square-free polynomial over the (Boolean) hypercube. We investigate a hierarchy of linear programming relaxations for this problem, based on a
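As a concrete illustration of the hypercube formulation above, the stability number alpha(G) of a graph G = (V, E) equals the maximum of sum_i x_i - sum_{ij in E} x_i x_j over x in {0,1}^n, a quadratic square-free polynomial. A minimal brute-force sketch (function names are illustrative, not from the paper):

```python
from itertools import product

def objective(x, edges):
    # Quadratic square-free polynomial: sum_i x_i - sum_{ij in E} x_i * x_j
    return sum(x) - sum(x[i] * x[j] for i, j in edges)

def max_stable_set(n, edges):
    """Maximize the objective over the Boolean hypercube {0,1}^n.

    On 0/1 vectors each selected edge costs 1, so dropping one endpoint
    of a selected edge never decreases the value; an optimal x is the
    indicator vector of a maximum stable set, and the optimum equals
    the stability number alpha(G)."""
    return max(objective(x, edges) for x in product((0, 1), repeat=n))

# 5-cycle C5 has stability number alpha = 2
print(max_stable_set(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # -> 2
```

The LP relaxations studied in the paper replace this exponential enumeration with tractable certificates over the same polynomial.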
A Maximum Resonant Set of Polyomino Graphs
Directory of Open Access Journals (Sweden)
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M such that each of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The Myopic Stable Set for Social Environments
Demuynck, Thomas; Herings, P. Jean-Jacques; Saulle, Riccardo; Seel, Christian
2017-01-01
We introduce a new solution concept for models of coalition formation, called the myopic stable set. The myopic stable set is defined for a very general class of social environments and allows for an infinite state space. We show that the myopic stable set exists and is non-empty. Under minor
Weighted Maximum-Clique Transversal Sets of Graphs
Chuan-Min Lee
2011-01-01
A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...
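The problem statement above can be made concrete with a small brute-force sketch: enumerate all maximum cliques, then find a smallest vertex set hitting them all. This exhaustive approach is illustrative only (the paper studies efficient algorithms for special graph classes); all names are ours:

```python
from itertools import combinations

def all_maximum_cliques(n, edges):
    """Enumerate all cliques of maximum size by exhaustive search
    (exponential; intended only for tiny illustrative graphs)."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best, cliques = 0, []
    for size in range(1, n + 1):
        for cand in combinations(range(n), size):
            if all(v in adj[u] for u, v in combinations(cand, 2)):
                if size > best:          # found strictly larger cliques:
                    best, cliques = size, []   # discard the smaller ones
                cliques.append(set(cand))
    return cliques

def min_max_clique_transversal(n, edges):
    """Smallest vertex set intersecting every maximum clique,
    found by trying candidate sets in order of increasing size."""
    cliques = all_maximum_cliques(n, edges)
    for size in range(n + 1):
        for cand in combinations(range(n), size):
            if all(set(cand) & q for q in cliques):
                return set(cand)

# Two triangles sharing vertex 2: both maximum cliques contain vertex 2.
print(min_max_clique_transversal(5, [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]))  # -> {2}
```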
Linear Time Local Approximation Algorithm for Maximum Stable Marriage
Directory of Open Access Journals (Sweden)
Zoltán Király
2013-08-01
We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum-size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. This algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear-time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm, every person makes decisions using only their own list and some information requested from members of that list (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.
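For context, the proposal mechanism of Gale and Shapley referenced above can be sketched as follows, assuming complete, strict preference lists (the paper's harder setting with ties and incomplete lists requires the 3/2-approximation machinery it develops); all names are ours:

```python
def gale_shapley(men_prefs, women_prefs):
    """Classic deferred-acceptance algorithm for complete, strict
    preference lists: free proposers work down their lists; a receiver
    holds her best proposal so far and trades up when a better one arrives."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_idx = {m: 0 for m in men_prefs}   # next position each man proposes to
    engaged = {}                           # woman -> current partner
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_idx[m]]
        next_idx[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:   # w prefers m to her partner
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                       # rejected; m stays free
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))  # a is matched to y, b to x
```

With complete lists the result is always a perfect stable matching; with incomplete lists and ties, deferred acceptance alone no longer guarantees maximum size, which is where the approximation analysis enters.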
Optimum detection for extracting maximum information from symmetric qubit sets
International Nuclear Information System (INIS)
Mizuno, Jun; Fujiwara, Mikio; Sasaki, Masahide; Akiba, Makoto; Kawanishi, Tetsuya; Barnett, Stephen M.
2002-01-01
We demonstrate a class of optimum detection strategies for extracting the maximum information from sets of equiprobable real symmetric qubit states of a single photon. These optimum strategies were predicted by Sasaki et al. [Phys. Rev. A 59, 3325 (1999)]. The peculiar aspect is that detections with at least three outputs suffice for optimum extraction of information regardless of the number of signal elements. The cases of ternary (or trine), quinary, and septenary polarization signals are studied, where a standard von Neumann detection (a projection onto a binary orthogonal basis) fails to access the maximum information. Our experiments demonstrate that it is possible with present technologies to attain about 96% of the theoretical limit.
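The trine case above can be checked numerically: with the three-outcome measurement whose element j projects (up to a factor 2/3) onto the state orthogonal to trine state j, the extracted mutual information works out to log2(3) − 1 ≈ 0.585 bits. The setup below is an illustrative reconstruction, not code from the experiment:

```python
import math

def shannon(probs):
    # Shannon entropy in bits, skipping zero-probability outcomes
    return -sum(p * math.log2(p) for p in probs if p > 1e-12)

def trine_mutual_information():
    """Mutual information (bits) extracted from three equiprobable real
    qubit 'trine' states by the three-outcome measurement whose element j
    is (2/3)|psi_j_perp><psi_j_perp|, so outcome j rules out state j.
    Hilbert-space angles of the trine are 0, 60 and 120 degrees."""
    theta = [math.pi * j / 3 for j in range(3)]
    # P(outcome k | state j) = (2/3) * |<psi_k_perp|psi_j>|^2
    #                        = (2/3) * sin^2(theta_j - theta_k)
    cond = [[(2 / 3) * math.sin(theta[j] - theta[k]) ** 2 for k in range(3)]
            for j in range(3)]
    prior = 1 / 3
    marginal = [sum(prior * cond[j][k] for j in range(3)) for k in range(3)]
    return shannon(marginal) - sum(prior * shannon(row) for row in cond)

print(round(trine_mutual_information(), 4))  # -> 0.585
```

A binary von Neumann measurement cannot reach this value, which is the point the experiment makes with three-output detection.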
Tutte sets in graphs II: The complexity of finding maximum Tutte sets
Bauer, D.; Broersma, Haitze J.; Kahl, N.; Morgana, A.; Schmeichel, E.; Surowiec, T.
2007-01-01
A well-known formula of Tutte and Berge expresses the size of a maximum matching in a graph $G$ in terms of what is usually called the deficiency. A subset $X$ of $V(G)$ for which this deficiency is attained is called a Tutte set of $G$. While much is known about maximum matchings, less is known
Conditions for maximum isolation of stable condensate during separation in gas-condensate systems
Energy Technology Data Exchange (ETDEWEB)
Trivus, N.A.; Belkina, N.A.
1969-02-01
A thermodynamic analysis is made of the gas-liquid separation process in order to determine the relationship between conditions of maximum stable condensate separation and physico-chemical nature and composition of condensate. The analysis was made by considering the multicomponent gas-condensate fluid produced from Zyrya field as a ternary system, composed of methane, an intermediate component (propane and butane) and a heavy residue, C/sub 6+/. Composition of 5 ternary systems was calculated for a wide variation in separator conditions. At each separator pressure there is maximum condensate production at a certain temperature. This occurs because solubility of condensate components changes with temperature. Results of all calculations are shown graphically. The graphs show conditions of maximum stable condensate separation.
Maximum margin classifier working in a set of strings.
Koyano, Hitoshi; Hayashida, Morihiro; Akutsu, Tatsuya
2016-03-01
Numbers and numerical vectors account for a large portion of data. However, recently, the amount of string data generated has increased dramatically. Consequently, classifying string data is a common problem in many fields. The most widely used approach to this problem is to convert strings into numerical vectors using string kernels and subsequently apply a support vector machine that works in a numerical vector space. However, this non-one-to-one conversion involves a loss of information and makes it impossible to evaluate, using probability theory, the generalization error of a learning machine, considering that the given data to train and test the machine are strings generated according to probability laws. In this study, we approach this classification problem by constructing a classifier that works in a set of strings. To evaluate the generalization error of such a classifier theoretically, probability theory for strings is required. Therefore, we first extend a limit theorem for a consensus sequence of strings demonstrated by one of the authors and co-workers in a previous study. Using the obtained result, we then demonstrate that our learning machine classifies strings in an asymptotically optimal manner. Furthermore, we demonstrate the usefulness of our machine in practical data analysis by applying it to predicting protein-protein interactions using amino acid sequences and classifying RNAs by the secondary structure using nucleotide sequences.
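As a sketch of the string-kernel conversion discussed above (the step whose information loss motivates the paper's string-space classifier), here is a minimal k-spectrum kernel; the function name and example strings are ours:

```python
from collections import Counter

def spectrum_kernel(s, t, k=3):
    """k-spectrum string kernel: inner product of the k-mer count vectors
    of two strings.  This maps strings into a numerical feature space for
    an SVM; the mapping is not one-to-one, which is exactly the
    information loss the paper argues against."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[w] * ct[w] for w in cs)

print(spectrum_kernel("GATTACA", "ATTAC"))  # -> 3 (shared 3-mers: ATT, TTA, TAC)
```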
Caley, T.; Roche, D.M.V.A.P.; Waelbroeck, C.; Michel, E.
2014-01-01
We use the fully coupled atmosphere-ocean three-dimensional model of intermediate complexity iLOVECLIM to simulate the climate and oxygen stable isotopic signal during the Last Glacial Maximum (LGM, 21 000 years). By using a model that is able to explicitly simulate the sensor (δ18O), results can be
Directory of Open Access Journals (Sweden)
Salces Judit
2011-08-01
Background: Reference genes with stable expression are required to normalize expression differences of target genes in qPCR experiments. Several procedures and companion software packages have been proposed to find the most stable genes. Model-based procedures are attractive because they provide a solid statistical framework. NormFinder, a widely used software package, uses a model-based method. The pairwise comparison procedure implemented in geNorm is simpler but one of the most extensively used. In the present work, a statistical approach based on maximum likelihood estimation under mixed models was tested and compared with the NormFinder and geNorm packages. Sixteen candidate genes were tested in whole blood samples from control and heat-stressed sheep. Results: A model including gene and treatment as fixed effects, and sample (animal), gene-by-treatment, gene-by-sample and treatment-by-sample interactions as random effects, with heteroskedastic residual variance across gene-by-treatment levels, was selected from a variety of models using goodness-of-fit and predictive-ability criteria. The mean square error obtained under the selected model was used as an indicator of gene expression stability. The genes ranked at the top and bottom by the three approaches were similar; however, notable differences appeared in the best pair of genes selected by each method and in the remaining genes of the rankings. Differences among the expression values of normalized targets for each statistical approach were also found. Conclusions: The optimal statistical properties of maximum likelihood estimation, combined with the flexibility of mixed models, allow for more accurate estimation of the expression stability of genes in many different situations. Accurate selection of reference genes has a direct impact on the normalized expression values of a given target gene. This may be critical when the aim of the study is to compare expression rate differences among samples under different environmental
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
DEFF Research Database (Denmark)
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio...... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error
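A minimal sketch of the ML idea, without the detection-error and multi-session modeling that is the paper's actual contribution: for one framed-slotted ALOHA frame of L slots, choose the tag count n that maximizes the (approximate) multinomial likelihood of the observed empty, singleton, and collision slot counts. All names are illustrative:

```python
import math

def log_likelihood(n, L, n_empty, n_single, n_coll):
    """Log-likelihood of the observed slot outcomes in one frame of L slots
    when n tags each pick a slot uniformly at random (slots treated as
    independent, a common approximation; the paper's model additionally
    includes detection errors and multiple reader sessions)."""
    p0 = (1 - 1 / L) ** n                    # slot empty
    p1 = (n / L) * (1 - 1 / L) ** (n - 1)    # exactly one tag (singleton)
    pc = max(1e-300, 1 - p0 - p1)            # collision (clamped for n = 1)
    return (n_empty * math.log(p0) + n_single * math.log(p1)
            + n_coll * math.log(pc))

def ml_cardinality(L, n_empty, n_single, n_coll, n_max=2000):
    """Grid-search ML estimate of the tag count n."""
    return max(range(1, n_max + 1),
               key=lambda n: log_likelihood(n, L, n_empty, n_single, n_coll))

# Sanity check on noiseless expected counts for n = 100 tags, L = 128 slots
L, n_true = 128, 100
p0 = (1 - 1 / L) ** n_true
p1 = (n_true / L) * (1 - 1 / L) ** (n_true - 1)
print(ml_cardinality(L, L * p0, L * p1, L * (1 - p0 - p1)))  # -> 100
```

Feeding the exact expected counts recovers n exactly because the cross-entropy is minimized at the true parameter; with noisy counts the estimate fluctuates around it.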
Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan
2013-01-01
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
An electromagnetism-like method for the maximum set splitting problem
Directory of Open Access Journals (Sweden)
Kratica Jozef
2013-01-01
In this paper, an electromagnetism-like approach (EM) for solving the maximum set splitting problem (MSSP) is applied. A hybrid approach, consisting of movement based on attraction-repulsion mechanisms combined with the proposed scaling technique, directs EM to promising search regions. A fast implementation of the local search procedure additionally improves the efficiency of the overall EM system. The performance of the proposed EM approach is evaluated on two classes of instances from the literature: minimum hitting set and Steiner triple systems. The results show that, except in one case, EM reaches optimal solutions on minimum hitting set instances with up to 500 elements and 50,000 subsets. It also reaches all optimal/best-known solutions for Steiner triple systems.
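For a concrete view of the search space: a subset is "split" by a 2-coloring of the ground set when it contains elements of both colors, and MSSP asks for a coloring splitting as many subsets as possible. The hill-climbing sketch below is a plain stand-in for the EM metaheuristic's movement plus local search, under our own naming:

```python
def split_count(x, subsets):
    """Number of subsets containing elements on both sides of the 0/1 partition x."""
    return sum(1 for s in subsets
               if any(x[e] for e in s) and not all(x[e] for e in s))

def local_search_splitting(n, subsets):
    """First-improvement bit-flip local search for maximum set splitting.
    This is a plain hill-climber, not the paper's EM metaheuristic; it only
    illustrates the 0/1 search space that EM's attraction-repulsion moves
    and local search procedure operate on."""
    x = [i % 2 for i in range(n)]       # alternating start partition
    best = split_count(x, subsets)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            x[i] ^= 1                   # tentatively flip element i
            c = split_count(x, subsets)
            if c > best:
                best, improved = c, True
            else:
                x[i] ^= 1               # revert non-improving flip
    return best, x

subsets = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]
print(local_search_splitting(4, subsets)[0])  # -> 4 (all subsets split)
```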
Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong
2016-01-01
Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters such as the maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of known clusters and are thus inapplicable to data sets without them. In this work, we propose a novel overall performance measure, the maximum clustering set-proportion (MCS-P), based on the likelihood of the union of detected clusters given the applied data set. MCS-P was compared with existing performance measures in a simulation study of maximum spatial cluster size selection. Results for other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required by the proposed strategy, selecting the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.
Setting the renormalization scale in QCD: The principle of maximum conformality
DEFF Research Database (Denmark)
Brodsky, S. J.; Di Giustino, L.
2012-01-01
A key problem in making precise perturbative QCD predictions is the uncertainty in determining the renormalization scale mu of the running coupling alpha_s(mu^2). The purpose of the running coupling in any gauge theory is to sum all terms involving the beta function; in fact, when the renormalization scale is set properly, all nonconformal (beta not equal 0) terms in a perturbative expansion arising from renormalization are summed into the running coupling. The remaining terms in the perturbative series are then identical to those of a conformal theory, i.e., the corresponding theory with beta = 0. The resulting scale-fixed predictions using the principle of maximum conformality (PMC) are independent of the choice of renormalization scheme, a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale setting in the Abelian limit...
Zhou, Xiaofan; Shen, Xing-Xing; Hittinger, Chris Todd
2018-01-01
The sizes of the data matrices assembled to resolve branches of the tree of life have increased dramatically, motivating the development of programs for fast, yet accurate, inference. For example, several different fast programs have been developed in the very popular maximum likelihood framework, including RAxML/ExaML, PhyML, IQ-TREE, and FastTree. Although these programs are widely used, a systematic evaluation and comparison of their performance using empirical genome-scale data matrices has so far been lacking. To address this question, we evaluated these four programs on 19 empirical phylogenomic data sets with hundreds to thousands of genes and up to 200 taxa with respect to likelihood maximization, tree topology, and computational speed. For single-gene tree inference, we found that the more exhaustive and slower strategies (ten searches per alignment) outperformed faster strategies (one tree search per alignment) using RAxML, PhyML, or IQ-TREE. Interestingly, single-gene trees inferred by the three programs yielded comparable coalescent-based species tree estimations. For concatenation-based species tree inference, IQ-TREE consistently achieved the best-observed likelihoods for all data sets, and RAxML/ExaML was a close second. In contrast, PhyML often failed to complete concatenation-based analyses, whereas FastTree was the fastest but generated lower likelihood values and more dissimilar tree topologies in both types of analyses. Finally, data matrix properties, such as the number of taxa and the strength of phylogenetic signal, sometimes substantially influenced the programs' relative performance. Our results provide real-world gene and species tree phylogenetic inference benchmarks to inform the design and execution of large-scale phylogenomic data analyses. PMID: 29177474
The simplifying role of pure and stable sets in induced possibilities and necessities
International Nuclear Information System (INIS)
Tsiporkova, E.
1996-01-01
The behaviour of upper and lower (conditional) possibilities and necessities induced by a multivalued mapping is investigated in the case where pure and stable sets with respect to this multivalued mapping are involved. It is shown that the upper and lower possibilities (resp. necessities) and the upper and lower conditional possibilities (resp. necessities) of a pure set with respect to the original (non-restricted) mapping coincide. Simplified expressions for the lower conditional possibilities and necessities induced by a multivalued mapping are established in the case of a restricting set that is pure with respect to this multivalued mapping. Moreover, if the multivalued mapping involved is non-void and surjective, the interaction of stable sets with the upper and lower (conditional) possibilities and necessities is also studied.
The Myopic Stable Set for Social Environments (RM/17/002-revised)
Demuynck, Thomas; Herings, P. Jean-Jacques; Saulle, Riccardo; Seel, Christian
2018-01-01
We introduce a new solution concept for models of coalition formation, called the myopic stable set (MSS). The MSS is defined for a general class of social environments and allows for an infinite state space. An MSS exists and, under minor continuity assumptions, it is also unique. The MSS
Stable SET knockdown in breast cell carcinoma inhibits cell migration and invasion
Energy Technology Data Exchange (ETDEWEB)
Li, Jie [Department of Occupational Health and Occupational Medicine, School of Public Health and Tropical Medicine, Southern Medical University, Guangzhou (China); Key Laboratory of Modern Toxicology of Shenzhen, Shenzhen Center for Disease Control and Prevention, Shenzhen (China); Yang, Xi-fei [Key Laboratory of Modern Toxicology of Shenzhen, Shenzhen Center for Disease Control and Prevention, Shenzhen (China); Ren, Xiao-hu [Department of Occupational Health and Occupational Medicine, School of Public Health and Tropical Medicine, Southern Medical University, Guangzhou (China); Key Laboratory of Modern Toxicology of Shenzhen, Shenzhen Center for Disease Control and Prevention, Shenzhen (China); Meng, Xiao-jing [Department of Occupational Health and Occupational Medicine, School of Public Health and Tropical Medicine, Southern Medical University, Guangzhou (China); Huang, Hai-yan [Key Laboratory of Modern Toxicology of Shenzhen, Shenzhen Center for Disease Control and Prevention, Shenzhen (China); Zhao, Qiong-hui [Shenzhen Entry-Exit Inspection and Quarantine Bureau, Shenzhen (China); Yuan, Jian-hui; Hong, Wen-xu; Xia, Bo; Huang, Xin-feng; Zhou, Li [Key Laboratory of Modern Toxicology of Shenzhen, Shenzhen Center for Disease Control and Prevention, Shenzhen (China); Liu, Jian-jun, E-mail: bio-research@hotmail.com [Key Laboratory of Modern Toxicology of Shenzhen, Shenzhen Center for Disease Control and Prevention, Shenzhen (China); Zou, Fei, E-mail: zoufei616@163.com [Department of Occupational Health and Occupational Medicine, School of Public Health and Tropical Medicine, Southern Medical University, Guangzhou (China)
2014-10-10
Highlights: • We employed RNA interference to knock down SET expression in breast cancer cells. • Knockdown of SET expression inhibits cell proliferation, migration and invasion. • Knockdown of SET expression increases the activity and expression of PP2A. • Knockdown of SET expression decreases the expression of MMP-9. - Abstract: Breast cancer is the most common malignant tumor among women; however, the mechanisms underlying this devastating disease remain unclear. SET is an endogenous inhibitor of protein phosphatase 2A (PP2A) and is involved in many physiological and pathological processes. SET can promote tumor development through inhibiting PP2A. In this study, we explore the role of SET in the migration and invasion of the breast cancer cell lines MDA-MB-231 and ZR-75-30. Stable suppression of SET expression through lentivirus-mediated RNA interference (RNAi) was shown to inhibit the growth, migration and invasion of breast cancer cells. Knockdown of SET increases the activity and expression of PP2Ac and decreases the expression of matrix metalloproteinase 9 (MMP-9). These data demonstrate that SET may be involved in the pathogenic processes of breast cancer, indicating that SET can serve as a potential therapeutic target for the treatment of breast cancer.
Wang, Yongguang; Roberts, David L; Xu, Baihua; Cao, Rifang; Yan, Min; Jiang, Qiongping
2013-12-30
Accumulated evidence suggests that Social Cognition and Interaction Training (SCIT) is associated with improved performance in social cognition and social skills in patients diagnosed with psychotic disorders. The current study examined the clinical utility of SCIT in patients with schizophrenia in Chinese community settings. Adults with stable schizophrenia were recruited from local community health institutions and randomly assigned to the SCIT group (n = 22) or a waiting-list control group (n = 17). The SCIT group received the SCIT intervention plus treatment-as-usual, whereas the waiting-list group received only treatment-as-usual during the period of the study. All patients were administered the Chinese versions of the Personal and Social Performance Scale (PSP), Face Emotion Identification Task (FEIT), Eyes task, and Attributional Style Questionnaire (ASQ) at baseline of the SCIT treatment period and at follow-up, 6 months after completion of the 20-week treatment period. Patients in the SCIT group showed a significant improvement in the domains of emotion perception, theory of mind, attributional style, and social functioning compared to those in the waiting-list group. Findings indicate that SCIT is a feasible and promising method for improving social cognition and social functioning among Chinese outpatients with stable schizophrenia.
Global analysis of all linear stable settings of a storage ring lattice
Directory of Open Access Journals (Sweden)
David S Robin
2008-02-01
The traditional process of designing and tuning the magnetic lattice of a particle storage ring to produce certain desired properties is not straightforward. Often solutions are found through trial and error, and it is not clear that the solutions are close to optimal. This can be a very unsatisfying process. In this paper we take a step back and look at the general stability limits of the lattice. We employ a technique we call GLASS (GLobal scan of All Stable Settings) that allows us to rapidly scan and find all possible stable modes and then characterize their associated properties. We illustrate how the GLASS technique gives a global and comprehensive vision of the capabilities of the lattice. In a sense, GLASS functions as a lattice observatory, clearly displaying all possibilities. The power of the GLASS technique is that it is fast and comprehensive. There is no fitting involved. It gives the lattice designer clear guidance as to where to look for interesting operational points. We demonstrate the technique by applying it to two existing storage ring lattices: the triple bend achromat of the Advanced Light Source and the double bend achromat of CAMD. We show that, using GLASS, we have uncovered many interesting and in some cases previously unknown stability regions.
Kohn, Matthew J.; McKay, Moriah
2010-11-01
Oxygen isotope data provide a key test of general circulation models (GCMs) for the Last Glacial Maximum (LGM) in North America, which have otherwise proved difficult to validate. High δ18O pedogenic carbonates in central Wyoming have been interpreted to indicate increased summer precipitation sourced from the Gulf of Mexico. Here we show that tooth enamel δ18O of large mammals, which is strongly correlated with local water and precipitation δ18O, is lower during the LGM in Wyoming, not higher. Similar data from Texas, California, Florida and Arizona indicate higher δ18O values than in the Holocene, which is also predicted by GCMs. Tooth enamel data closely validate some recent models of atmospheric circulation and precipitation δ18O, including an increase in the proportion of winter precipitation for central North America, and summer precipitation in the southern US, but suggest aridity can bias pedogenic carbonate δ18O values significantly.
Stable operation of a Secure QKD system in the real-world setting
Tomita, Akihisa
2007-06-01
Quantum Key Distribution (QKD) now steps forward from the proof of principle to the validation of practical feasibility. Nevertheless, QKD technology must respond to real-world challenges, such as stable operation in a fluctuating environment and a security proof under practical settings. We report our recent progress on stable operation of a QKD system and on key generation with security assurance. A QKD system should be robust to temperature fluctuations in a common office environment. We developed a loop mirror, a substitute for a Faraday mirror, to allow easy compensation for the temperature dependence of the device. A phase-locking technique was also employed to synchronize the system clock to the quantum signals. This technique is indispensable for transmission systems based on installed fiber cables, which stretch and shrink with temperature changes. Security proofs of QKD, however, have assumed ideal conditions, such as the use of a genuine single-photon source and/or unlimited computational resources. It has been highly desirable to give an assurance of security for practical systems, where the ideal conditions are no longer satisfied. We have constructed a theory to estimate the leakage of information on the transmitted key under practically attainable conditions, and have developed a QKD system equipped with software for secure key distillation. The QKD system generates the final key at a rate of 2000 bps after 20 km of fiber transmission. The eavesdropper's information on the final key is guaranteed to be less than 2^-7 per bit. This is the first successful generation of a secure key with a quantitative assurance of the upper bound on the leaked information. It will advance the realization of highly secure metropolitan optical communication networks resistant to any type of eavesdropping.
Mooney, Walter D.; Ritsema, Jeroen; Hwang, Yong Keun
2012-01-01
A joint analysis of global seismicity and seismic tomography indicates that the seismic potential of continental intraplate regions is correlated with the seismic properties of the lithosphere. Archean and Early Proterozoic cratons with cold, stable continental lithospheric roots have fewer crustal earthquakes and a lower maximum earthquake catalog moment magnitude (Mcmax). The geographic distribution of thick lithospheric roots is inferred from the global seismic model S40RTS that displays shear-velocity perturbations (δVS) relative to the Preliminary Reference Earth Model (PREM). We compare δVS at a depth of 175 km with the locations and moment magnitudes (Mw) of intraplate earthquakes in the crust (Schulte and Mooney, 2005). Many intraplate earthquakes concentrate around the pronounced lateral gradients in lithospheric thickness that surround the cratons and few earthquakes occur within cratonic interiors. Globally, 27% of stable continental lithosphere is underlain by δVS≥3.0%, yet only 6.5% of crustal earthquakes with Mw>4.5 occur above these regions with thick lithosphere. No earthquakes in our catalog with Mw>6 have occurred above mantle lithosphere with δVS>3.5%, although such lithosphere comprises 19% of stable continental regions. Thus, for cratonic interiors with seismically determined thick lithosphere (1) there is a significant decrease in the number of crustal earthquakes, and (2) the maximum moment magnitude found in the earthquake catalog is Mcmax=6.0. We attribute these observations to higher lithospheric strength beneath cratonic interiors due to lower temperatures and dehydration in both the lower crust and the highly depleted lithospheric root.
Faenza, Y.; Oriolo, G.; Stauffer, G.
2011-01-01
We propose an algorithm for solving the maximum weighted stable set problem on claw-free graphs that runs in O(n^3) time, drastically improving the previous best known complexity bound. This algorithm is based on a novel decomposition theorem for claw-free graphs, which is also introduced in the present paper. Despite being weaker than the well-known structure result for claw-free graphs given by Chudnovsky and Seymour, our decomposition theorem is, on the other hand, algorithmic, i.e. it is ...
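The problem the abstract addresses can be stated concretely with a brute-force baseline. This is an illustrative sketch of the problem definition only (exponential time, small graphs), not the paper's O(n^3) decomposition algorithm for claw-free graphs:

```python
from itertools import combinations

def max_weight_stable_set(n, edges, weights):
    """Exhaustive search for a maximum-weight stable (independent) set
    in a graph on vertices 0..n-1. Exponential-time baseline for tiny
    graphs; serves only to define the objective."""
    adj = {frozenset(e) for e in edges}
    best, best_w = set(), 0
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            # a set is stable iff no two of its vertices are adjacent
            if all(frozenset((u, v)) not in adj
                   for u, v in combinations(subset, 2)):
                w = sum(weights[v] for v in subset)
                if w > best_w:
                    best, best_w = set(subset), w
    return best, best_w
```

On a 5-cycle with unit weights, for example, the maximum stable set has size (and weight) 2.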
International Nuclear Information System (INIS)
Tournier, Robert F.
2014-01-01
An undercooled liquid is unstable. The driving force of the glass transition at T_g is a change of the undercooled-liquid Gibbs free energy. The classical Gibbs free energy change for crystal formation is completed by including an enthalpy saving. The critical nucleus for crystal growth is used as a probe to observe the Laplace pressure change Δp accompanying the enthalpy change −V_m×Δp at T_g, where V_m is the molar volume. A stable glass–liquid transition model predicts the specific heat jump of fragile liquids at T≤T_g; the Kauzmann temperature T_K, where the liquid entropy excess with regard to the crystal goes to zero; the equilibrium enthalpy between T_K and T_g; the maximum nucleation rate at T_K of superclusters containing magic atom numbers; and the equilibrium latent heats at T_g and T_K. Strong-to-fragile and strong-to-strong liquid transitions at T_g are also described, and all their thermodynamic parameters are determined from their specific heat jumps. The existence of fragile liquids quenched in the amorphous state, which do not undergo a liquid–liquid transition during the heating that precedes their crystallization, is predicted. Long ageing times leading to the formation at T_K of a stable glass composed of superclusters containing up to 147 atoms, touching and interpenetrating, are evaluated from nucleation rates. A fragile-to-fragile liquid transition occurs at T_g without stable-glass formation, while a strong glass is stable after the transition.
Research of Strategic Alliance Stable Decision-making Model Based on Rough Set and DEA
Zhang Yi
2013-01-01
This article first applies rough set theory to build a stability evaluation system for strategic alliances, using attribute reduction to eliminate redundant indexes. Six enterprises are selected as decision-making units, with four input and two output indexes; a DEA model is then used to compute efficiencies, analyze the reasons for poorly performing decision-making units, and identify the direction and magnitude of improvement, providing a reference for alliance stability.
Pfützner, Andreas; Musholt, Petra B; Schipper, Christina; Demircik, Filiz; Hengesbach, Carina; Flacke, Frank; Sieber, Jochen; Forst, Thomas
2013-11-01
Hematocrit (HCT) is known to be a confounding factor that interferes with many blood glucose (BG) measurement technologies, resulting in wrong readings. Dynamic electrochemistry has been identified as one possible way to correct for these potential deviations. The purpose of this laboratory investigation was to assess the HCT stability of four BG meters known to employ dynamic electrochemistry (BGStar and iBGStar, Sanofi; Wavesense Jazz, AgaMatrix; Wellion Linus, MedTrust) in comparison with three other devices (GlucoDock, Medisana; OneTouch Verio Pro, LifeScan; FreeStyle Freedom InsuLinx, Abbott-Medisense). Venous heparinized blood was immediately aliquoted after draw and manipulated to contain three different BG concentrations (60-90, 130-160, and 280-320 mg/dl) and five different HCT levels (25%, 35%, 45%, 55%, and 60%). After careful oxygenation to normal blood oxygen pressure, each of the resulting 15 different samples was measured six times with three devices and three strip lots of each meter. The YSI Stat 2300 served as laboratory reference method. Stability to HCT influence was assumed when less than 10% difference occurred between the highest and lowest mean glucose deviations in relation to HCT concentrations [hematocrit interference factor (HIF)]. Five of the investigated self-test meters showed a stable performance with the different HCT levels tested in this investigation: BGStar (HIF 4.6%), iBGStar (6.6%), Wavesense Jazz (4.1%), Wellion Linus (8.5%), and OneTouch Verio Pro (6.2%). The two other meters were influenced by HCT (FreeStyle InsuLinx 17.8%; GlucoDock 46.5%). In this study, meters employing dynamic electrochemistry, as used in the BGStar and iBGStar devices, were shown to correct for potential HCT influence on the meter results. Dynamic electrochemistry appears to be an effective way to handle this interfering condition. © 2013 Diabetes Technology Society.
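The hematocrit interference factor used above (the spread between the highest and lowest mean glucose deviations across HCT levels) can be sketched as follows; the exact averaging scheme used in the study is an assumption here:

```python
def hematocrit_interference_factor(readings_by_hct, reference):
    """HIF sketch: spread (in percentage points) between the highest
    and lowest mean relative glucose deviation across hematocrit levels.

    readings_by_hct: dict mapping HCT level (%) -> list of meter readings
    reference: reference glucose value (e.g. from the YSI analyzer).
    Illustrative reconstruction of the criterion described in the abstract.
    """
    mean_dev = {
        hct: 100.0 * (sum(r) / len(r) - reference) / reference
        for hct, r in readings_by_hct.items()
    }
    return max(mean_dev.values()) - min(mean_dev.values())
```

Under the study's criterion, a meter with HIF below 10% would count as stable to hematocrit influence.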
Pfützner, Andreas; Musholt, Petra B.; Schipper, Christina; Demircik, Filiz; Hengesbach, Carina; Flacke, Frank; Sieber, Jochen; Forst, Thomas
2013-01-01
Background Hematocrit (HCT) is known to be a confounding factor that interferes with many blood glucose (BG) measurement technologies, resulting in wrong readings. Dynamic electrochemistry has been identified as one possible way to correct for these potential deviations. The purpose of this laboratory investigation was to assess the HCT stability of four BG meters known to employ dynamic electrochemistry (BGStar and iBGStar, Sanofi; Wavesense Jazz, AgaMatrix; Wellion Linus, MedTrust) in comparison with three other devices (GlucoDock, Medisana; OneTouch Verio Pro, LifeScan; FreeStyle Freedom InsuLinx, Abbott-Medisense). Methods Venous heparinized blood was immediately aliquoted after draw and manipulated to contain three different BG concentrations (60–90, 130–160, and 280–320 mg/dl) and five different HCT levels (25%, 35%, 45%, 55%, and 60%). After careful oxygenation to normal blood oxygen pressure, each of the resulting 15 different samples was measured six times with three devices and three strip lots of each meter. The YSI Stat 2300 served as laboratory reference method. Stability to HCT influence was assumed when less than 10% difference occurred between the highest and lowest mean glucose deviations in relation to HCT concentrations [hematocrit interference factor (HIF)]. Results Five of the investigated self-test meters showed a stable performance with the different HCT levels tested in this investigation: BGStar (HIF 4.6%), iBGStar (6.6%), Wavesense Jazz (4.1%), Wellion Linus (8.5%), and OneTouch Verio Pro (6.2%). The two other meters were influenced by HCT (FreeStyle InsuLinx 17.8%; GlucoDock 46.5%). Conclusions In this study, meters employing dynamic electrochemistry, as used in the BGStar and iBGStar devices, were shown to correct for potential HCT influence on the meter results. Dynamic electrochemistry appears to be an effective way to handle this interfering condition. PMID:24351179
Time-Dependent Selection of an Optimal Set of Sources to Define a Stable Celestial Reference Frame
Le Bail, Karine; Gordon, David
2010-01-01
Temporal statistical position stability is required for VLBI sources to define a stable Celestial Reference Frame (CRF) and has been studied in many recent papers. This study analyzes the sources from the latest realization of the International Celestial Reference Frame (ICRF2) with the Allan variance, in addition to taking into account the apparent linear motions of the sources. Focusing on the 295 defining sources shows how they are a good compromise among different criteria, such as statistical stability and sky distribution, as well as having a sufficient number of sources, despite the fact that the most stable sources of the entire ICRF2 are mostly in the Northern Hemisphere. Nevertheless, the selection of a stable set is not unique: studying different solutions (GSF005a and AUG24 from GSFC and OPA from the Paris Observatory) over different time periods (1989.5 to 2009.5 and 1999.5 to 2009.5) leads to selections that can differ in up to 20% of the sources. Improvements in observing, recording, and networks are some of the causes, giving better stability for the CRF over the last decade than over the last twenty years. But this may also be explained by the assumption of stationarity, which is not necessarily right for some sources.
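As a rough illustration of the stability statistic invoked above, a minimal non-overlapping Allan variance for an evenly sampled coordinate series might look like this (the ICRF2 analysis uses more elaborate estimators and handles irregular sampling):

```python
def allan_variance(x, tau=1):
    """Non-overlapping Allan variance of an evenly sampled series x
    at averaging factor tau (in samples): half the mean squared
    difference of consecutive block averages. Sketch only; assumes
    len(x) >= 2 * tau."""
    m = len(x) // tau                                   # number of blocks
    means = [sum(x[i * tau:(i + 1) * tau]) / tau for i in range(m)]
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(m - 1)]
    return sum(diffs) / (2 * (m - 1))
```

A perfectly stable (constant) position series yields zero, while noise or drift raises the statistic.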
Shen, Zhiyun; Jiang, Changying; Chen, Liqun
2018-02-01
To evaluate the feasibility and effectiveness of conducting a train-the-trainer (TTT) program for stable coronary artery disease (SCAD) management in community settings. The study involved two steps: (1) tutors trained community nurses as trainers and (2) the community nurses trained patients. 51 community nurses attended a 2-day TTT program and completed questionnaires assessing knowledge, self-efficacy, and satisfaction. In a feasibility, non-randomized controlled study, 120 SCAD patients were assigned either to an intervention group (which received interventions from trained nurses) or a control group (which received routine management). Pre- and post-intervention, patients' self-management behaviors and satisfaction were assessed to determine the program's overall impact. Community nurses' knowledge and self-efficacy improved significantly, as did patients' self-management behaviors. The TTT program for SCAD management in community settings in China was generally feasible and effective, but many obstacles remain, including patients' noncompliance, nurses' busy work schedules, and lack of policy support. Finding ways to enhance the motivation of community nurses and patients with SCAD is important in implementing community-based TTT programs for SCAD management; further multicenter and randomized controlled trials are needed. Copyright © 2017 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
R. Schneider
2013-11-01
δ13Catm level in the Penultimate (~140 000 yr BP) and Last Glacial Maximum (~22 000 yr BP), which can be explained either by (i) changes in the isotopic composition or (ii) intensity of the carbon input fluxes to the combined ocean/atmosphere carbon reservoir, or (iii) by long-term peat buildup. Our isotopic data suggest that the carbon cycle evolution along Termination II and the subsequent interglacial was controlled by essentially the same processes as during the last 24 000 yr, but with different phasing and magnitudes. Furthermore, a 5000 yr lag in the CO2 decline relative to EDC temperatures is confirmed during the glacial inception at the end of MIS5.5 (120 000 yr BP). Based on our isotopic data this lag can be explained by terrestrial carbon release and carbonate compensation.
International Nuclear Information System (INIS)
Hawke, D.J.
2004-01-01
The lifetime of individual petrel colonies is poorly known. This study used radiocarbon, 13C, and 15N analysis of soil to determine the maximum possible age of a colony presently occupied by Westland petrels. A sample of Ap horizon soil in lithic contact was selected for analysis, as the soil least likely to have been redistributed by petrel burrowing. Chemical removal of mobile organic matter decreased δ15N from 14.0 permille (typical of breeding colony soils) to 6.1 permille (within the range of temperate forest soils without sea-bird breeding). δ13C values changed little, from −27.1 permille (untreated soil) to −28.4 permille (treated soil), and were typical of forest soil with C3 vegetation and no incorporation of marine C. Duplicate AMS radiocarbon analysis of the treated sample yielded a combined conventional radiocarbon age of 864 ± 32 BP, indicating that sea-bird breeding could not have occurred at the site for more than 740-960 calendar years. Initial colony occupation may have occurred much later than this, and may not have been continuous. Sea-bird species other than Westland petrels may also have used the site. (author). 30 refs., 2 figs
Herguera, J. C.; Herbert, T.; Kashgarian, M.; Charles, C.
2010-05-01
Intermediate ocean circulation changes during the Last Glacial Maximum (LGM) in the North Pacific have been linked with Northern Hemisphere climate through air-sea interactions, although the extent and the source of the variability of the processes forcing these changes are still not well resolved. The ventilated volumes and ages in the upper wind-driven layer are related to the wind stress curl and surface buoyancy fluxes at mid to high latitudes in the North Pacific. In contrast, the deeper thermohaline layers are more effectively ventilated by direct atmosphere-sea exchange during convective formation of Subantarctic Mode Waters (SAMW) and Antarctic Intermediate Waters (AAIW) in the Southern Ocean, the precursors of Pacific Intermediate Waters (PIW) in the North Pacific. Results reported here show a fundamental change in the carbon isotopic gradient between intermediate and deep waters during the LGM in the eastern North Pacific, indicating a deepening of nutrient- and carbon-rich waters. These observations suggest changes in the source and nature of intermediate waters of Southern Ocean origin that feed PIW and enhanced ventilation processes in the North Pacific, further affecting paleoproductivity and export patterns in this basin. Furthermore, oxygen isotopic results indicate these changes may have been accomplished in part by changes in circulation affecting the intermediate depths during the LGM.
Directory of Open Access Journals (Sweden)
D. M. D. Hendriks
2008-01-01
Full Text Available A Fast Methane Analyzer (FMA) is assessed for its applicability in a closed path eddy covariance field set-up in a peat meadow. The FMA uses off-axis integrated cavity output spectroscopy combined with a highly specific narrow band laser for the detection of CH_{4} and strongly reflective mirrors to obtain a laser path length of 2–20×10^{3} m. Statistical testing and a calibration experiment showed high precision (7.8×10^{−3} ppb) and accuracy (<0.30%) of the instrument, while no drift was observed. The instrument response time was determined to be 0.10 s. In the field set-up, the FMA is attached to a scroll pump and combined with a 3-axis ultrasonic anemometer and an open path infrared gas analyzer for measurements of carbon dioxide and water vapour. The power-spectra and co-spectra of the instruments were satisfactory for 10 Hz sampling rates.
Due to erroneous measurements, spikes, and periods of low turbulence, gaps made up 26% of the data series. Observed CH_{4} fluxes consisted mainly of emission and showed a diurnal cycle, but were rather variable over time. The average CH_{4} emission was 29.7 nmol m^{−2} s^{−1}, while the typical maximum CH_{4} emission was approximately 80.0 nmol m^{−2} s^{−1} and the typical minimum flux was approximately 0.0 nmol m^{−2} s^{−1}. The correspondence of the measurements with flux chamber measurements in the footprint was good, and the observed CH_{4} emission rates were comparable with eddy covariance CH_{4} measurements in other peat areas.
Additionally, three measurement techniques with lower sampling frequencies were simulated, which might give the possibility to measure CH_{4} fluxes without an external pump and save energy. Disjunct eddy covariance appeared to be the most reliable substitute for 10 Hz eddy covariance, while relaxed eddy accumulation gave
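The core eddy covariance computation behind the fluxes reported in this record reduces, in its simplest form, to the mean product of the fluctuations of vertical wind and concentration (Reynolds decomposition); real processing pipelines add despiking, detrending, coordinate rotation, and spectral corrections:

```python
def ec_flux(w, c):
    """Minimal eddy-covariance flux sketch: mean of w'c', where
    w is vertical wind speed (m/s) and c is scalar concentration
    (e.g. CH4 in nmol/m^3), both sampled at the same rate (e.g. 10 Hz).
    Returns the flux in concentration-units times m/s."""
    n = len(w)
    wm = sum(w) / n            # mean vertical wind
    cm = sum(c) / n            # mean concentration
    return sum((wi - wm) * (ci - cm) for wi, ci in zip(w, c)) / n
```

Positive values correspond to emission (upward transport), negative values to uptake.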
Energy Technology Data Exchange (ETDEWEB)
Ma, Hong -Hao [Chongqing Univ., Chongqing (People' s Republic of China); Wu, Xing -Gang [Chongqing Univ., Chongqing (People' s Republic of China); Ma, Yang [Chongqing Univ., Chongqing (People' s Republic of China); Brodsky, Stanley J. [Stanford Univ., Stanford, CA (United States); Mojaza, Matin [KTH Royal Inst. of Technology and Stockholm Univ., Stockholm (Sweden)
2015-05-26
A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent of the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which has been primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. As a result, we then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way of rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and, by sampling from the parameters' posterior, avoids both under- and overfitting along all directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
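A toy version of the moment-matching log-likelihood ascent discussed above, for the special case of independent ±1 spins where the model average has a closed form (for the pairwise Ising models of the abstract, the model averages would instead come from Gibbs sampling, which is what introduces the randomness analyzed there):

```python
import math

def fit_fields(data_means, lr=0.5, steps=2000):
    """Steepest-ascent maximum-likelihood learning for independent
    +/-1 spins: the maximum entropy model matching the sample means
    <s_i>. The log-likelihood gradient for field h_i is simply
    (data mean - model mean), and the model mean is tanh(h_i).
    Illustrative only; no rectification or sampling noise here."""
    h = [0.0] * len(data_means)
    for _ in range(steps):
        for i, m in enumerate(data_means):
            h[i] += lr * (m - math.tanh(h[i]))   # gradient ascent step
    return h
```

At convergence, tanh(h_i) reproduces each empirical mean, i.e. the moment constraints are satisfied.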
2005-04-29
Final Report, July 2004 to July 2005. The Application of an Army Prospective Payment Model Structured on the Standards Set Forth by the CHAMPUS Maximum... Health Care Administration. Acknowledgments: I would like to acknowledge my wife, Karen, who allowed me the
Credal Networks under Maximum Entropy
Lukasiewicz, Thomas
2013-01-01
We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...
Directory of Open Access Journals (Sweden)
Hanif Esmail
Full Text Available Cardiovascular disorders are common in HIV-1 infected persons in Africa and presentation is often insidious. Development of screening algorithms for cardiovascular disorders appropriate to a resource-constrained setting could facilitate timely referral. Cardiothoracic ratio (CTR) on chest radiograph (CXR) has been suggested as a potential screening tool but little is known about its reproducibility and stability. Our primary aim was to evaluate the stability and the inter-observer variability of CTR in HIV-1 infected outpatients. We further evaluated the prevalence of cardiomegaly (CTR≥0.5) and its relationship with other risk factors in this population. HIV-1 infected participants were identified during screening for a tuberculosis vaccine trial in Khayelitsha, South Africa, between August 2011 and April 2012. Participants had a digital posterior-anterior CXR performed as well as history, examination, and baseline observations. CXRs were viewed using OsiriX software and CTR was calculated using digital callipers. 450 HIV-1-infected adults were evaluated, median age 34 years (IQR 30-40), with a CD4 count of 566/mm3 (IQR 443-724); 70% were on antiretroviral therapy (ART). The prevalence of cardiomegaly was 12.7% (95% C.I. 9.6%-15.8%). CTR was calculated by a 2nd reader for 113 participants; measurements were highly correlated, r = 0.95 (95% C.I. 0.93-0.97), and agreement on cardiomegaly was substantial, κ = 0.78 (95% C.I. 0.61-0.95). CXR was repeated in 51 participants at 4-12 weeks; CTR measurements between the 2 time points were highly correlated, r = 0.77 (95% C.I. 0.68-0.88), and agreement on cardiomegaly excellent, κ = 0.92 (95% C.I. 0.77-1). Participants with cardiomegaly had a higher median BMI (31.3, IQR 27.4-37.4, versus 26.9, IQR 23.2-32.4; p<0.0001) and median systolic blood pressure (130, IQR 121-141, versus 125, IQR 117-135; p = 0.01). CTR is a robust measurement, stable over time, with substantial inter-observer agreement. A prospective study evaluating utility of CXR to
International Nuclear Information System (INIS)
Evans, D.K.
1986-01-01
Seventy-five percent of the world's stable isotope supply comes from one producer, Oak Ridge National Laboratory (ORNL) in the US. The Canadian concern is that foreign needs will be met only after domestic needs, thus creating a shortage of stable isotopes in Canada. This article describes the present situation in Canada (availability and cost) of stable isotopes, the isotope enrichment techniques, and related research programs at Chalk River Nuclear Laboratories (CRNL)
Receiver function estimated by maximum entropy deconvolution
Institute of Scientific and Technical Information of China (English)
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
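The Toeplitz/Levinson step mentioned above can be sketched as a standard Levinson-Durbin recursion for the prediction-error filter; the reflection coefficient k staying below 1 in magnitude is what keeps the procedure stable, as the abstract notes:

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion: from autocorrelations r[0..order],
    compute the prediction-error filter a (a[0] = 1) and the final
    prediction-error power. Textbook form, sketched for illustration."""
    a = [1.0] + [0.0] * order
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + sum(a[i] * r[m - i] for i in range(1, m))
        k = -acc / err                     # reflection coefficient, |k| < 1
        a_new = a[:]
        for i in range(1, m):
            a_new[i] = a[i] + k * a[m - i]
        a_new[m] = k
        a = a_new
        err *= (1.0 - k * k)               # error power shrinks each order
    return a, err
```

For an AR(1)-like autocorrelation r = [1, 0.5, 0.25] the recursion recovers the single-pole filter [1, -0.5, 0] with error power 0.75.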
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
International Nuclear Information System (INIS)
Anon.
1979-01-01
This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed
DEFF Research Database (Denmark)
Failla, Virgilio; Melillo, Francesca; Reichstein, Toke
2014-01-01
Is entrepreneurship a more stable career choice for high employment turnover individuals? We find that a transition to entrepreneurship induces a shift towards stayer behavior, and we identify job matching, job satisfaction, and lock-in effects as the main drivers. These findings have major implications...
International Nuclear Information System (INIS)
Samios, N.P.
1993-01-01
I have been asked to review the subject of stable particles, essentially the particles that eventually comprised the meson and baryon octets, with a few more additions, with an emphasis on the contributions made by experiments utilizing the bubble chamber technique. In this activity, much work had been done by the photographic emulsion technique and cloud chambers, exposed to cosmic rays as well as accelerator-based beams. In fact, many if not most of the stable particles were found by these latter two techniques; however, the forte of the bubble chamber (coupled with the newer and more powerful accelerators) was to verify, and reinforce with large statistics, the existence of these states, to find some of the more difficult ones, mainly neutrals, and further to elucidate their properties, i.e., spin, parity, lifetimes, decay parameters, etc.
A SIMPLE AND STRONGLY CONSISTENT ESTIMATOR FOR STABLE DISTRIBUTIONS
Directory of Open Access Journals (Sweden)
Cira E. Guevara Otiniano
2016-06-01
Full Text Available Stable distributions are extensively used to analyze earnings of financial assets, such as exchange rates and stock prices. In this paper we propose a simple and strongly consistent estimator for the scale parameter of a symmetric stable Lévy distribution. The advantage of this estimator is that its computational time is minimal, so it can be used to initialize computationally intensive procedures such as maximum likelihood. With random samples of size n we tested the efficacy of these estimators by the Monte Carlo method. We also include applications to three data sets.
International Nuclear Information System (INIS)
Brazier, J.L.; Guinamant, J.L.
1995-01-01
Following the progress in the technology of separating and measuring isotopes, stable isotopes have become the preferred 'labelling elements' for a large number of applications. The isotopic composition of natural products shows significant variations as a result of different factors such as the climate, the seasons, or their geographic origins. Thus, nominally identical products from different origins can differ in isotopic composition, which makes it possible to authenticate food and agricultural products. It is also important in detecting pharmacological and medical chemicals. This review article deals with the technology, such as chromatography and spectrometry, adapted to this aim, and some important applications. 17 refs. 6 figs
Energy Technology Data Exchange (ETDEWEB)
Quigg, Chris [Fermilab
2018-04-13
For very heavy quarks, relations derived from heavy-quark symmetry imply novel narrow doubly heavy tetraquark states containing two heavy quarks and two light antiquarks. We predict that double-beauty states will be stable against strong decays, whereas the double-charm states and mixed beauty+charm states will dissociate into pairs of heavy-light mesons. Observing a new double-beauty state through its weak decays would establish the existence of tetraquarks and illuminate the role of heavy color-antitriplet diquarks as hadron constituents.
Maximum Acceleration Recording Circuit
Bozeman, Richard J., Jr.
1995-01-01
Coarsely digitized maximum levels are recorded in blown fuses. The circuit feeds power to an accelerometer and makes a nonvolatile record of the maximum level to which the output of the accelerometer rises during the measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, the circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, the circuit is simpler, less bulky, consumes less power, costs less, and avoids the retrieval and analysis of data recorded in magnetic or electronic memory devices. The circuit is used, for example, to record accelerations to which commodities are subjected during transportation on trucks.
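A software analogue of the fuse-based recording scheme described above, purely for illustration (the actual device is analog hardware): only the highest quantization level reached is retained, not the full time series.

```python
def record_max(samples, levels):
    """Coarsely digitized maximum recorder: `levels` are ascending trip
    thresholds (one per 'fuse'); each sample 'blows' every fuse at or
    below it. Returns the highest tripped level, or None if none blew.
    Hypothetical software sketch of the circuit's behavior."""
    tripped = -1
    for x in samples:
        while tripped + 1 < len(levels) and x >= levels[tripped + 1]:
            tripped += 1                 # blow the next fuse
    return None if tripped < 0 else levels[tripped]
```

As with the circuit, the record is monotone: later, smaller accelerations cannot "un-blow" a fuse.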
2015-01-01
Stable beams: two simple words that carry so much meaning at CERN. When LHC page one switched from "squeeze" to "stable beams" at 10.40 a.m. on Wednesday, 3 June, it triggered scenes of jubilation in control rooms around the CERN sites, as the LHC experiments started to record physics data for the first time in 27 months. This is what CERN is here for, and it’s great to be back in business after such a long period of preparation for the next stage in the LHC adventure. I’ve said it before, but I’ll say it again. This was a great achievement, and testimony to the hard and dedicated work of so many people in the global CERN community. I could start to list the teams that have contributed, but that would be a mistake. Instead, I’d simply like to say that an achievement as impressive as running the LHC – a machine of superlatives in every respect – takes the combined effort and enthusiasm of everyone ...
Local Search Approaches in Stable Matching Problems
Directory of Open Access Journals (Sweden)
Toby Walsh
2013-10-01
Full Text Available The stable marriage (SM) problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools or, more generally, to any two-sided market. In the classical formulation, n men and n women express their preferences (via a strict total order) over the members of the other sex. Solving an SM problem means finding a stable marriage, where stability is an envy-free notion: no man and woman who are not married to each other would both prefer each other to their partners or to being single. We consider both the classical stable marriage problem and one of its useful variations, denoted SMTI (Stable Marriage with Ties and Incomplete lists), where the men and women express their preferences in the form of an incomplete preference list with ties over a subset of the members of the other sex. Matchings are permitted only with people who appear in these preference lists, and we try to find a stable matching that marries as many people as possible. Whilst the SM problem is polynomial to solve, the SMTI problem is NP-hard. We propose to tackle both problems via a local search approach, which exploits properties of the problems to reduce the size of the neighborhood and to make local moves efficiently. We empirically evaluate our algorithm for SM problems by measuring its runtime behavior and its ability to sample the lattice of all possible stable marriages. We evaluate our algorithm for SMTI problems in terms of both its runtime behavior and its ability to find a maximum cardinality stable marriage. Experimental results suggest that for SM problems, the number of steps of our algorithm grows only as O(n log(n)), and that it samples very well the set of all stable marriages. It is thus a fair and efficient approach to generate stable marriages. Furthermore, our approach for SMTI problems is able to solve large problems, quickly returning stable matchings of large and often optimal size, despite the
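For contrast with the local search approach above, the classical deferred-acceptance (Gale-Shapley) algorithm solves the basic SM problem directly, returning the man-optimal stable marriage; this sketch is a textbook baseline, not the authors' method:

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance for the classical SM problem.
    men_prefs[m] / women_prefs[w]: strict preference lists (indices of
    the other side, best first). Returns a dict man -> woman that is
    stable and man-optimal."""
    n = len(men_prefs)
    # rank[w][m] = position of man m in woman w's list (lower = better)
    rank = [{m: r for r, m in enumerate(w)} for w in women_prefs]
    next_prop = [0] * n        # next preference index each man proposes to
    fiance = [None] * n        # current partner of each woman
    free = list(range(n))
    while free:
        m = free.pop()
        w = men_prefs[m][next_prop[m]]
        next_prop[m] += 1
        if fiance[w] is None:
            fiance[w] = m                      # w accepts her first proposer
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])             # w trades up; old partner freed
            fiance[w] = m
        else:
            free.append(m)                     # w rejects m
    return {fiance[w]: w for w in range(n)}
```

Its output is one vertex of the lattice of stable marriages that the paper's local search samples.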
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
International Nuclear Information System (INIS)
Biondi, L.
1998-01-01
The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some. [it]
Determination of Maximum Follow-up Speed of Electrode System of Resistance Projection Welders
DEFF Research Database (Denmark)
Wu, Pei; Zhang, Wenqi; Bay, Niels
2004-01-01
…the weld process settings for stable production and high quality of products. In this paper, the maximum follow-up speed of the electrode system was tested by using a specially designed device which can be mounted on all types of machine and easily applied in industry; the corresponding mathematical expression was derived based on a mathematical model. Good accordance was found between test and model.
On Maximum Entropy and Inference
Directory of Open Access Journals (Sweden)
Luigi Gresele
2017-11-01
Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data, that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Robust Maximum Association Estimators
A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)
2017-01-01
The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
International Nuclear Information System (INIS)
Hacker, M.; Hack, N.; Tiling, R.; Jakobs, T.; Nikolaou, K.; Becker, C.; Ziegler, F. von; Knez, A.; Koenig, A.; Klauss, V.
2007-01-01
Aim: In patients with stable angina pectoris both morphological and functional information about the coronary artery tree should be present before revascularization therapy is performed. High accuracy was shown for spiral computed tomography (MDCT) angiography acquired with a 64-slice CT scanner compared to invasive coronary angiography (ICA) in detecting "obstructive" coronary artery disease (CAD). Gated myocardial SPECT (MPI) is an established method for the noninvasive assessment of functional significance of coronary stenoses. Aim of the study was to evaluate the combination of 64-slice CT angiography plus MPI in comparison to ICA plus MPI in the detection of hemodynamically relevant coronary artery stenoses in a clinical setting. Patients, methods: 30 patients (63 ± 10.8 years, 23 men) with stable angina (21 with suspected, 9 with known CAD) were investigated. MPI, 64-slice CT angiography and ICA were performed; reversible and fixed perfusion defects were allocated to determining lesions separately for MDCT angiography and ICA. The combination of MDCT angiography plus MPI was compared to the results of ICA plus MPI. Results: Sensitivity, specificity, negative and positive predictive value for the combination of MDCT angiography plus MPI was 85%, 97%, 98% and 79%, respectively, on a vessel-based and 93%, 87%, 93% and 88%, respectively, on a patient-based level. 19 coronary arteries with stenoses ≥50% in both ICA and MDCT angiography showed no ischemia in MPI. Conclusion: The combination of 64-slice CT angiography and gated myocardial SPECT enabled a comprehensive non-invasive view of the anatomical and functional status of the coronary artery tree. (orig.)
International Nuclear Information System (INIS)
Enslin, J.H.R.
1990-01-01
A well-engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel, or wind generator, to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small rating Remote Area Power Supply systems. The advantages at larger temperature variations and larger power rated systems are much higher. Other advantages include optimal sizing and system monitoring and control.
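The hill-climbing current maximization described above (often called perturb-and-observe) admits a compact sketch. This is an illustrative reconstruction under assumed names, not the paper's actual microprocessor algorithm:

```python
def mppt_step(duty, last_duty, current, last_current, step=0.01):
    """One perturb-and-observe update of the converter duty cycle.

    Keep perturbing in the same direction while the measured battery
    charging current increases; reverse direction when it drops.
    """
    direction = 1.0 if duty >= last_duty else -1.0
    if current < last_current:      # last perturbation reduced the current
        direction = -direction      # so climb the other way
    return min(max(duty + direction * step, 0.0), 1.0)
```

Iterating this rule makes the duty cycle oscillate within about one step of the maximum power point, which is the usual behavior of perturb-and-observe trackers.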
Breysse, Nicolas; Vial, Gaelle; Pattingre, Lauriane; Ossendorp, Bernadette C; Mahieu, Karin; Reich, Hermine; Rietveld, Anton; Sieke, Christian; van der Velde-Koerts, Trijntje; Sarda, Xavier
2018-06-03
Proposals to update the methodology for the international estimated short-term intake (IESTI) equations were made during an international workshop held in Geneva in 2015. Changes to several parameters of the current four IESTI equations (cases 1, 2a, 2b, and 3) were proposed. In this study, the overall impact of these proposed changes on estimates of short-term exposure was studied using the large portion data available in the European Food Safety Authority PRIMo model and the residue data submitted in the framework of the European Maximum Residue Levels (MRL) review under Article 12 of Regulation (EC) No 396/2005. Evaluation of consumer exposure using the current and proposed equations resulted in substantial differences in the exposure estimates; however, there were no significant changes regarding the number of accepted MRLs. For the different IESTI cases, the median ratio of the new versus the current equation is 1.1 for case 1, 1.4 for case 2a, 0.75 for case 2b, and 1 for case 3. The impact, expressed as a shift in the IESTI distribution profile, indicated that the 95th percentile IESTI shifted from 50% of the acute reference dose (ARfD) with the current equations to 65% of the ARfD with the proposed equations. This IESTI increase resulted in the loss of 1.2% of the MRLs (37 out of 3110) tested within this study. At the same time, the proposed equations would have allowed 0.4% of the MRLs (14 out of 3110) that were rejected with the current equations to be accepted. The commodity groups that were most impacted by these modifications are solanaceae (e.g., potato, eggplant), lettuces, pulses (dry), leafy brassica (e.g., kale, Chinese cabbage), and pome fruits. The active substances that were most affected were fluazifop-p-butyl, deltamethrin, and lambda-cyhalothrin.
International Nuclear Information System (INIS)
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.
2009-01-01
We used 5704 ¹⁴C, ¹⁰Be, and ³He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ∼14.5 ka.
Directory of Open Access Journals (Sweden)
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over...
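For the Mean Energy Model mentioned above, the maximum-entropy distribution under a mean-energy constraint is the Gibbs family p_i ∝ exp(-βE_i), with β chosen so that the constraint holds. A minimal numerical sketch (the helper and its bisection bounds are our own, not from the paper):

```python
import math

def maxent_mean_energy(energies, target, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution subject to sum_i p_i * E_i = target.

    The solution has the form p_i proportional to exp(-beta * E_i);
    beta is found by bisection, using that the mean energy is a
    decreasing function of beta.
    """
    def mean(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(wi * e for wi, e in zip(w, energies)) / z

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean(mid) > target:
            lo = mid      # mean too high: need larger beta
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]
```

With energies (0, 1, 2) and a target mean of 1, the result is the uniform distribution (β = 0), the familiar unconstrained maximum-entropy case.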
Probable maximum flood control
International Nuclear Information System (INIS)
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1988-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
International Nuclear Information System (INIS)
Rust, D.M.
1984-01-01
The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1989-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Functional Maximum Autocorrelation Factors
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose: We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA (Ramsay, 1997) to functional maximum autocorrelation factors (MAF) (Switzer, 1985; Larsen, 2001). We apply the method to biological shapes as well as reflectance spectra. Methods: MAF seeks linear combinations of the original variables that maximize autocorrelation between... Conclusions: MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially...
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
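A minimal sketch of the idea: Gaussian-kernel correntropy between predictions and labels, maximized by gradient ascent for a linear predictor with an L2 penalty on the parameters. The function names and the plain gradient-ascent optimizer are ours; the paper's alternating optimization is not reproduced:

```python
import math

def train_mcc_linear(X, y, sigma=1.0, lam=0.01, lr=0.2, epochs=500):
    """Linear predictor trained under a regularized Maximum Correntropy
    Criterion: maximize
        sum_i exp(-(y_i - w.x_i)^2 / (2 sigma^2)) - lam * ||w||^2.

    The Gaussian kernel downweights samples with large errors, so noisy
    or outlying labels barely influence the solution, unlike a squared
    loss applied equally to all samples.
    """
    d = len(X[0])
    w = [0.0] * d
    for _ in range(epochs):
        grad = [-2.0 * lam * wj for wj in w]       # regularizer gradient
        for xi, yi in zip(X, y):
            err = yi - sum(wj * xj for wj, xj in zip(w, xi))
            # d/dw_j of exp(-err^2 / 2 sigma^2) = k(err) * err / sigma^2 * x_j
            g = math.exp(-err * err / (2 * sigma ** 2)) * err / sigma ** 2
            for j in range(d):
                grad[j] += g * xi[j]
        w = [wj + lr * gj / len(X) for wj, gj in zip(w, grad)]
    return w
```

Because exp(-err²/2σ²)·err vanishes for large errors, a grossly mislabeled sample contributes almost nothing to the gradient, which is the robustness property the abstract points to; a squared loss would instead weight that sample most heavily.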
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
International Nuclear Information System (INIS)
Ryan, J.
1981-01-01
By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments
National Oceanic and Atmospheric Administration, Department of Commerce — Tissue samples (skin, bone, blood, muscle) are analyzed for stable carbon, stable nitrogen, and stable sulfur analysis. Many samples are used in their entirety for...
Maximum potential preventive effect of hip protectors
van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.
2007-01-01
OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who
Faghihi, V.; Verstappen-Dumoulin, B. M. A. A.; Jansen, H. G.; van Dijk, G.; Aerts-Bijma, A. T.; Kerstel, E. R. T.; Groening, M.; Meijer, H. A. J.
2015-01-01
RATIONALE: Research using water with enriched levels of the rare stable isotopes of hydrogen and/or oxygen requires well-characterized enriched reference waters. The International Atomic Energy Agency (IAEA) did have such reference waters available, but these are now exhausted. New reference waters
Encoding Strategy for Maximum Noise Tolerance Bidirectional Associative Memory
National Research Council Canada - National Science Library
Shen, Dan
2003-01-01
In this paper, the Basic Bidirectional Associative Memory (BAM) is extended by choosing weights in the correlation matrix, for a given set of training pairs, which result in a maximum noise tolerance set for BAM...
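For reference, the Basic BAM that the paper extends stores bipolar pattern pairs in a correlation matrix W = Σ_k x_k y_kᵀ and recalls by iterating thresholded products. A textbook sketch (the paper's maximum-noise-tolerance weight selection is not reproduced here):

```python
def bam_train(pairs):
    """Build the basic BAM correlation matrix W = sum_k x_k y_k^T
    for bipolar (+1/-1) pattern pairs."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = [[0] * m for _ in range(n)]
    for x, y in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += x[i] * y[j]
    return W

def bam_recall(W, x, steps=10):
    """Recall the associated y from a (possibly noisy) x by iterating
    x -> y -> x until the bidirectional updates stabilize."""
    sign = lambda v, old: 1 if v > 0 else (-1 if v < 0 else old)
    n, m = len(W), len(W[0])
    y = [1] * m  # tie-break default
    for _ in range(steps):
        y = [sign(sum(W[i][j] * x[i] for i in range(n)), y[j]) for j in range(m)]
        x = [sign(sum(W[i][j] * y[j] for j in range(m)), x[i]) for i in range(n)]
    return x, y
```

For well-separated training pairs, recall from an input with a flipped bit still converges to the stored association; the paper's contribution is choosing the weights to maximize how much such noise can be tolerated.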
Characterizing graphs of maximum matching width at most 2
DEFF Research Database (Denmark)
Jeong, Jisu; Ok, Seongmin; Suh, Geewon
2017-01-01
The maximum matching width is a width-parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...
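The cut function here is the size of a maximum matching of the bipartite graph induced by an edge cut, a quantity computable with the classical augmenting-path method. An illustrative stand-alone sketch:

```python
def max_bipartite_matching(left, adj):
    """Size of a maximum matching in a bipartite graph via augmenting
    paths (Kuhn's algorithm). In maximum matching width, this quantity
    (computed on the bipartite graph induced by each cut of the
    branch-decomposition) serves as the cut function. Illustration only.
    """
    match = {}  # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj.get(u, ()):
            if v in seen:
                continue
            seen.add(v)
            # take v if free, or re-route its current partner elsewhere
            if v not in match or try_augment(match[v], seen):
                match[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in left)
```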
Stable convergence and stable limit theorems
Häusler, Erich
2015-01-01
The authors present a concise but complete exposition of the mathematical theory of stable convergence and give various applications in different areas of probability theory and mathematical statistics to illustrate the usefulness of this concept. Stable convergence holds in many limit theorems of probability theory and statistics – such as the classical central limit theorem – which are usually formulated in terms of convergence in distribution. Originated by Alfred Rényi, the notion of stable convergence is stronger than the classical weak convergence of probability measures. A variety of methods is described which can be used to establish this stronger stable convergence in many limit theorems which were originally formulated only in terms of weak convergence. Naturally, these stronger limit theorems have new and stronger consequences which should not be missed by neglecting the notion of stable convergence. The presentation will be accessible to researchers and advanced students at the master's level...
Density estimation by maximum quantum entropy
International Nuclear Information System (INIS)
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-01-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets
Stable configurations of graphene on silicon
Energy Technology Data Exchange (ETDEWEB)
Javvaji, Brahmanandam; Shenoy, Bhamy Maithry [Department of Aerospace Engineering, Indian Institute of Science, Bangalore 560012 (India); Mahapatra, D. Roy, E-mail: droymahapatra@aero.iisc.ernet.in [Department of Aerospace Engineering, Indian Institute of Science, Bangalore 560012 (India); Ravikumar, Abhilash [Department of Metallurgical and Materials Engineering, National Institute of Technology Karnataka, Surathkal 575025 (India); Hegde, G.M. [Center for Nano Science and Engineering, Indian Institute of Science, Bangalore 560012 (India); Rizwan, M.R. [Department of Metallurgical and Materials Engineering, National Institute of Technology Karnataka, Surathkal 575025 (India)
2017-08-31
Highlights: • Simulations of the epitaxial growth process for the silicon–graphene system are performed. • The most favourable orientation of the graphene sheet on a silicon substrate is identified. • Atomic local strain due to silicon–carbon bond formation is analyzed. - Abstract: Integration of graphene on silicon-based nanostructures is crucial in advancing graphene based nanoelectronic device technologies. The present paper provides a new insight on the combined effect of graphene structure and silicon (001) substrate on their two-dimensional anisotropic interface. Molecular dynamics simulations involving the sub-nanoscale interface reveal a most favourable set of temperature-independent orientations of the monolayer graphene sheet with an angle of ∼15° between its armchair direction and the [010] axis of the silicon substrate. While computing the favorable stable orientations, both the translational and the rotational vibrations of graphene are included. The possible interactions between the graphene atoms and the silicon atoms are identified from their coordination. The graphene sheet shows maximum bonding density with bond length 0.195 nm and minimum bond energy when interfaced with the silicon substrate at the 15° orientation. Local deformation analysis reveals probability distributions with maximum strain levels of 0.134, 0.047 and 0.029 for 900 K, 300 K and 100 K, respectively, in the silicon surface for 15°-oriented graphene, whereas the maximum probable strain in graphene is about 0.041 irrespective of temperature. Silicon–silicon dimer formation is changed due to silicon–carbon bonding. These results may help further in band structure engineering of the silicon–graphene lattice.
National Aeronautics and Space Administration — The code in the stableGP package implements Gaussian process calculations using efficient and numerically stable algorithms. Description of the algorithms is in the...
Angina Pectoris (Stable Angina)
Updated: Aug 21, 2017. You may have heard the term "angina pectoris" or "stable angina" in your doctor's office...
Maximum mass of magnetic white dwarfs
International Nuclear Information System (INIS)
Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez
2015-01-01
We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10¹³ G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound for B ∼ 10¹³ G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)
Maximum Parsimony on Phylogenetic networks
2012-01-01
Background Phylogenetic networks are generalizations of phylogenetic trees that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network, and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as the Sankoff and Fitch algorithms, extend naturally to networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
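On trees, the Fitch procedure mentioned above admits a very compact bottom-up pass. The sketch below is an illustrative minimal version for a single character on a rooted binary tree; the dict-based tree encoding and the node name `'root'` are assumptions of this sketch, not the paper's representation:

```python
def fitch(tree, leaf_states):
    """Bottom-up pass of Fitch's small-parsimony algorithm.

    `tree` maps each internal node to its (left, right) children;
    `leaf_states` maps each leaf to its observed character state.
    Returns the per-node state sets and the parsimony score.
    """
    sets, score = {}, 0

    def visit(node):
        nonlocal score
        if node in leaf_states:                 # leaf: singleton state set
            sets[node] = {leaf_states[node]}
            return sets[node]
        left, right = tree[node]
        a, b = visit(left), visit(right)
        inter = a & b
        if inter:                               # children agree: keep intersection
            sets[node] = inter
        else:                                   # disagreement: union, one substitution
            sets[node] = a | b
            score += 1
        return sets[node]

    visit('root')                               # root node assumed to be named 'root'
    return sets, score

# Example: character T/C on four leaves
tree = {'root': ('u', 'v'), 'u': ('A', 'B'), 'v': ('C', 'D')}
leaves = {'A': 'T', 'B': 'C', 'C': 'T', 'D': 'T'}
sets, score = fitch(tree, leaves)   # score 1: a single C<->T change suffices
```

On a network, the same recursion breaks down exactly where the abstract says: a reticulate vertex has two parents, so conflicting state-set assignments can arrive from different directions.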
Steeneveld, G.J.
2012-01-01
Understanding and prediction of the stable atmospheric boundary layer is a challenging task. Many physical processes are relevant in the stable boundary layer, i.e. turbulence, radiation, land surface coupling, orographic turbulent and gravity wave drag, and land surface heterogeneity. The development of robust stable boundary layer parameterizations for use in NWP and climate models is hampered by the multiplicity of processes and their unknown interactions. As a result, these models suffer ...
On some topological properties of stable measures
DEFF Research Database (Denmark)
Nielsen, Carsten Krabbe
1996-01-01
Summary The paper shows that the set of stable probability measures and the set of Rational Beliefs relative to a given stationary measure are closed in the strong topology, but not closed in the topology of weak convergence. However, subsets of the set of stable probability measures which are characterized by uniformity of convergence of the empirical distribution are closed in the topology of weak convergence. It is demonstrated that such subsets exist. In particular, there is an increasing sequence of sets of SIDS measures whose union is the set of all SIDS measures generated by a particular system, and such that each subset consists of stable measures. The uniformity requirement has a natural interpretation in terms of plausibility of Rational Beliefs.
Maximum Entropy in Drug Discovery
Directory of Open Access Journals (Sweden)
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches, either experimentally, computationally, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Stable isotopes labelled compounds
International Nuclear Information System (INIS)
1982-09-01
The catalogue on stable isotopes labelled compounds offers deuterium, nitrogen-15, and multiply labelled compounds. It includes: (1) conditions of sale and delivery, (2) the application of stable isotopes, (3) technical information, (4) product specifications, and (5) the complete delivery programme
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 21; Issue 9. Evolutionary Stable Strategy: Application of Nash Equilibrium in Biology. General Article Volume 21 Issue 9 September 2016 pp 803- ... Keywords. Evolutionary game theory, evolutionary stable state, conflict, cooperation, biological games.
PTree: pattern-based, stochastic search for maximum parsimony phylogenies
Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C.
2013-01-01
Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...
Maximum stellar iron core mass
Indian Academy of Sciences (India)
Vol. 60, No. 3 — Journal of Physics, March 2003, pp. 415–422. Maximum stellar iron core mass. F W GIACOBBE, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.
Maximum entropy beam diagnostic tomography
International Nuclear Information System (INIS)
Mottershead, C.T.
1985-01-01
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs
A portable storage maximum thermometer
International Nuclear Information System (INIS)
Fayart, Gerard.
1976-01-01
A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system [fr]
(2+1)-dimensional stable spatial Raman solitons
International Nuclear Information System (INIS)
Shverdin, M.Y.; Yavuz, D.D.; Walker, D.R.
2004-01-01
We analyze the formation, propagation, and interaction of stable two-frequency (2+1)-dimensional solitons, formed in a Raman media driven near maximum molecular coherence. The propagating light is trapped in the two transverse dimensions
Neutron spectra unfolding with maximum entropy and maximum likelihood
International Nuclear Information System (INIS)
Itoh, Shikoh; Tsunoda, Toshiharu
1989-01-01
A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
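For Poisson-distributed counts, a standard multiplicative (Richardson-Lucy-type) update yields a maximum-likelihood spectrum that stays positive at every iteration, which illustrates the properties the abstract emphasizes. The sketch below shows that generic idea only; it is not the unified algorithm of the paper, and the flat starting guess is an assumption:

```python
import numpy as np

def unfold_ml_poisson(R, counts, iters=200):
    """Richardson-Lucy-type multiplicative update: maximizes the Poisson
    likelihood of counts c ~ Poisson(R @ phi) over non-negative spectra phi.
    Generic sketch, not the specific algorithm of the paper."""
    R = np.asarray(R, dtype=float)
    c = np.asarray(counts, dtype=float)
    phi = np.full(R.shape[1], c.sum() / R.sum())   # flat, positive initial guess
    sens = R.sum(axis=0)                           # detector sensitivity per bin
    for _ in range(iters):
        pred = R @ phi                             # predicted counts
        ratio = np.divide(c, pred, out=np.zeros_like(c), where=pred > 0)
        phi *= (R.T @ ratio) / sens                # multiplicative => phi stays >= 0
    return phi

# With an identity response matrix the estimate reproduces the counts:
phi = unfold_ml_poisson(np.eye(3), [5, 2, 7])
```

Because each step multiplies the current estimate by a non-negative factor, positivity over the whole energy range is automatic, mirroring the behaviour claimed for the unified theory.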
Stable Chimeras and Independently Synchronizable Clusters
Cho, Young Sul; Nishikawa, Takashi; Motter, Adilson E.
2017-08-01
Cluster synchronization is a phenomenon in which a network self-organizes into a pattern of synchronized sets. It has been shown that diverse patterns of stable cluster synchronization can be captured by symmetries of the network. Here, we establish a theoretical basis to divide an arbitrary pattern of symmetry clusters into independently synchronizable cluster sets, in which the synchronization stability of the individual clusters in each set is decoupled from that in all the other sets. Using this framework, we suggest a new approach to find permanently stable chimera states by capturing two or more symmetry clusters—at least one stable and one unstable—that compose the entire fully symmetric network.
Normal modified stable processes
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2002-01-01
This paper discusses two classes of distributions, and stochastic processes derived from them: modified stable (MS) laws and normal modified stable (NMS) laws. This extends corresponding results for the generalised inverse Gaussian (GIG) and generalised hyperbolic (GH) or normal generalised inverse Gaussian (NGIG) laws. The wider framework thus established provides, in particular, for added flexibility in the modelling of the dynamics of financial time series, of importance especially as regards OU based stochastic volatility models for equities. In the special case of the tempered stable OU process ...
Applications of stable isotopes
International Nuclear Information System (INIS)
Letolle, R.; Mariotti, A.; Bariac, T.
1991-06-01
This report reviews the historical background and the properties of stable isotopes, the methods used for their measurement (mass spectrometry and others), the present techniques for isotope enrichment and separation, and lastly the various present and foreseeable applications (in nuclear energy, physical and chemical research, materials industry and research; tracing in industrial, medical and agronomical tests; the use of natural isotope variations for environmental studies, agronomy, natural resources appraising: water, minerals, energy). Some new possibilities in the use of stable isotopes are offered. A last chapter gives the present state and forecast development of stable isotope uses in France and Europe
Maximum entropy method in momentum density reconstruction
International Nuclear Information System (INIS)
Dobrzynski, L.; Holas, A.
1997-01-01
The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of the simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is a part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Ancestral sequence reconstruction with Maximum Parsimony
Herbst, Lina; Fischer, Mareike
2017-01-01
One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...
van der Wal, A. C.; Becker, A. E.; Koch, K. T.; Piek, J. J.; Teeling, P.; van der Loos, C. M.; David, G. K.
1996-01-01
OBJECTIVE: To investigate the extent of plaque inflammation in culprit lesions of patients with chronic stable angina. DESIGN: Retrospective study. SETTING: Amsterdam reference centre. SUBJECTS: 89 consecutive patients who underwent directional coronary atherectomy, 58 of whom met the following
National Research Council Canada - National Science Library
Adler, Robert
1997-01-01
We describe how to take a stable, ARMA, time series through the various stages of model identification, parameter estimation, and diagnostic checking, and accompany the discussion with a goodly number...
Maximum Water Hammer Sensitivity Analysis
Jalil Emadi; Abbas Solemani
2011-01-01
Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden failure of pumps. Determination of the maximum water hammer is considered one of the most important technical and economical items that engineers and designers of pumping stations and conveyance pipelines should take care of. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...
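For orientation, the classical first estimate of the surge produced by an instantaneous valve closure is the Joukowsky relation Δp = ρ·a·Δv, where a is the pressure-wave speed in the pipe. The helper below is an illustrative sketch of that textbook relation, not part of the Hammer software mentioned in the abstract:

```python
def joukowsky_surge(rho, wave_speed, delta_v):
    """Joukowsky estimate of the pressure rise (Pa) caused by an instantaneous
    velocity change delta_v (m/s) in a pipe with pressure-wave speed a (m/s)."""
    return rho * wave_speed * delta_v

# Water (1000 kg/m^3), wave speed 1000 m/s, sudden stop from 2 m/s:
dp = joukowsky_surge(1000.0, 1000.0, 2.0)   # 2.0e6 Pa, i.e. about 20 bar
```

Real transient simulations refine this bound by accounting for finite closure times, pipe elasticity, and reflections, which is why dedicated software is used in practice.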
Directory of Open Access Journals (Sweden)
Yunfeng Shan
2008-01-01
Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed the “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
LCLS Maximum Credible Beam Power
International Nuclear Information System (INIS)
Clendenin, J.
2005-01-01
The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed
A Note on Interpolation of Stable Processes | Nassiuma | Journal of ...
African Journals Online (AJOL)
Interpolation procedures tailored for Gaussian processes may not be applied to infinite variance stable processes. Alternative techniques suitable for a limited set of stable cases with index α∈(1,2] were initially studied by Pourahmadi (1984) for harmonizable processes. This was later extended to the ARMA stable process ...
Generic maximum likely scale selection
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based......The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...
International Nuclear Information System (INIS)
Axente, Damian
1998-01-01
The most important fields of stable isotope use are presented with examples. These are: 1. Isotope dilution analysis: trace analysis, measurements of volumes and masses; 2. Stable isotopes as tracers: transport phenomena, environmental studies, agricultural research, authentication of products and objects, archaeometry, studies of reaction mechanisms, structure and function determination of complex biological entities, studies of metabolism, breath tests for diagnostics; 3. Isotope equilibrium effects: measurement of equilibrium effects, investigation of equilibrium conditions, mechanism of drug action, study of natural processes, water cycle, temperature measurements; 4. Stable isotopes for advanced nuclear reactors: uranium nitride with ¹⁵N as nuclear fuel, ¹⁵⁷Gd for reactor control. In spite of some difficulties of stable isotope use, particularly related to the analytical techniques, which are slow and expensive, the number of papers reporting on this subject is steadily growing, as is the number of scientific meetings organized by the International Isotope Section and IAEA, Gordon Conferences, and regional meetings in Germany, France, etc. Stable isotope application development on a large scale is determined by improving their production technologies as well as those of labeled compounds and the analytical techniques. (author)
Extreme Maximum Land Surface Temperatures.
Garratt, J. R.
1992-09-01
There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
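The order of magnitude of the bound above can be reproduced by balancing absorbed shortwave flux against blackbody emission alone. The function below is a sketch of that limiting case (conduction and turbulent heat fluxes neglected), not the paper's simulations:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_equilibrium_temp(absorbed_flux):
    """Upper-bound surface temperature (K) when absorbed shortwave flux is
    balanced by longwave emission alone: sigma * T**4 = absorbed_flux.
    Neglects ground conduction and sensible/latent heat fluxes."""
    return (absorbed_flux / SIGMA) ** 0.25

t_k = radiative_equilibrium_temp(1000.0)   # ~364 K, i.e. roughly 91 degrees C
```

With the 1000 W m⁻² upper value used in the abstract, this limiting balance lands squarely in the quoted 90°-100°C vicinity; low soil thermal conductivity is what lets a real surface approach the limit.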
Calcium stable isotope geochemistry
Energy Technology Data Exchange (ETDEWEB)
Gussone, Nikolaus [Muenster Univ. (Germany). Inst. fuer Mineralogie; Schmitt, Anne-Desiree [Strasbourg Univ. (France). LHyGeS/EOST; Heuser, Alexander [Bonn Univ. (Germany). Steinmann-Inst. fuer Geologie, Mineralogie und Palaeontologie; Wombacher, Frank [Koeln Univ. (Germany). Inst. fuer Geologie und Mineralogie; Dietzel, Martin [Technische Univ. Graz (Austria). Inst. fuer Angewandte Geowissenschaften; Tipper, Edward [Cambridge Univ. (United Kingdom). Dept. of Earth Sciences; Schiller, Martin [Copenhagen Univ. (Denmark). Natural History Museum of Denmark
2016-08-01
This book provides an overview of the fundamentals and reference values for Ca stable isotope research, as well as current analytical methodologies including detailed instructions for sample preparation and isotope analysis. As such, it introduces readers to the different fields of application, including low-temperature mineral precipitation and biomineralisation, Earth surface processes and global cycling, high-temperature processes and cosmochemistry, and lastly human studies and biomedical applications. The current state of the art in these major areas is discussed, and open questions and possible future directions are identified. In terms of its depth and coverage, the current work extends and complements the previous reviews of Ca stable isotope geochemistry, addressing the needs of graduate students and advanced researchers who want to familiarize themselves with Ca stable isotope research.
System for memorizing maximum values
Bozeman, Richard J., Jr.
1992-08-01
The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.
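A software analogue of the disclosed maximum-value memory is a peak-hold latch: the largest sample seen since the last reset is retained. The class below is an illustrative sketch of that behaviour, not the patented circuit:

```python
class PeakHold:
    """Peak-hold latch: remembers the largest value seen since the last reset,
    analogous to a maximum-value memory fed by a sensor signal."""

    def __init__(self):
        self.peak = None

    def update(self, value):
        """Feed one sample; the latch only moves upward."""
        if self.peak is None or value > self.peak:
            self.peak = value
        return self.peak

    def reset(self):
        """Clear the stored maximum (the hardware analogue is destructive:
        a blown microfuse cannot be reset, which is the point of the patent)."""
        self.peak = None

ph = PeakHold()
for v in [0.2, 1.7, 0.9, 1.4]:
    ph.update(v)
# ph.peak is now 1.7, the largest sample seen
```

The microfuse design differs in one key way, noted in the comment: its record is permanent, surviving power loss, whereas a software latch is volatile.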
Remarks on the maximum luminosity
Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon
2018-04-01
The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c^5/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
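The quantity at the heart of the regularizer is the mutual information between the (discretized) classification response and the true label. A direct plug-in estimate from empirical frequencies, shown below, illustrates the quantity being maximized; it is not the paper's entropy-estimation machinery:

```python
import numpy as np

def mutual_information(responses, labels):
    """Empirical mutual information (in nats) between discrete classification
    responses and true class labels, via the plug-in frequency estimate."""
    responses = np.asarray(responses)
    labels = np.asarray(labels)
    mi = 0.0
    for r in np.unique(responses):
        for c in np.unique(labels):
            p_rc = np.mean((responses == r) & (labels == c))   # joint frequency
            if p_rc > 0:
                p_r = np.mean(responses == r)                  # marginals
                p_c = np.mean(labels == c)
                mi += p_rc * np.log(p_rc / (p_r * p_c))
    return mi

# Responses that perfectly track the labels carry log(2) nats for 2 classes:
mi_perfect = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
```

In the paper's setting this quantity enters the objective alongside the classification error and a complexity penalty, so a response that reveals nothing about the label (MI near zero) is penalized even if it happens to classify some training points correctly.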
Scintillation counter, maximum gamma aspect
International Nuclear Information System (INIS)
Thumim, A.D.
1975-01-01
A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)
Exact simulation of max-stable processes.
Dombry, Clément; Engelke, Sebastian; Oesting, Marco
2016-06-01
Max-stable processes play an important role as models for spatial extreme events. Their complex structure as the pointwise maximum over an infinite number of random functions makes their simulation difficult. Algorithms based on finite approximations are often inexact and computationally inefficient. We present a new algorithm for exact simulation of a max-stable process at a finite number of locations. It relies on the idea of simulating only the extremal functions, that is, those functions in the construction of a max-stable process that effectively contribute to the pointwise maximum. We further generalize the algorithm by Dieker & Mikosch (2015) for Brown-Resnick processes and use it for exact simulation via the spectral measure. We study the complexity of both algorithms, prove that our new approach via extremal functions is always more efficient, and provide closed-form expressions for their implementation that cover most popular models for max-stable processes and multivariate extreme value distributions. For simulation on dense grids, an adaptive design of the extremal function algorithm is proposed.
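The finite approximation the authors improve upon can be sketched directly from de Haan's spectral representation: truncate at N spectral functions and take a pointwise maximum. The log-Gaussian profiles, length scale, and cutoff N below are illustrative assumptions, not the paper's exact construction (which simulates only the extremal functions and is exact):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)       # simulation locations
N = 1000                             # truncation level of the spectral sum

# Arrival times of a unit-rate Poisson process on (0, inf)
gamma = np.cumsum(rng.exponential(size=N))

# Non-negative spectral functions with unit mean at every location:
# log-Gaussian sample paths from a squared-exponential covariance.
d = np.abs(x[:, None] - x[None, :])
cov = np.exp(-((d / 0.2) ** 2))
L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(x)))   # jitter for stability
g = L @ rng.standard_normal((len(x), N))              # N Gaussian paths
W = np.exp(g - np.diag(cov)[:, None] / 2.0)           # unit-mean lognormal paths

# Pointwise maximum over the truncated spectral representation:
Z = np.max(W / gamma[None, :], axis=1)   # approx. unit-Frechet max-stable field
```

The inexactness discussed in the abstract is visible here: a function with arrival time beyond `gamma[N-1]` could still exceed the running maximum somewhere, which is precisely what simulating only the extremal functions rules out.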
International Nuclear Information System (INIS)
Ishida, T.
1992-01-01
The research has been in four general areas: (1) correlation of isotope effects with molecular forces and molecular structures, (2) correlation of zero-point energy and its isotope effects with molecular structure and molecular forces, (3) vapor pressure isotope effects, and (4) fractionation of stable isotopes. 73 refs, 38 figs, 29 tabs
Interactive Stable Ray Tracing
DEFF Research Database (Denmark)
Dal Corso, Alessandro; Salvi, Marco; Kolb, Craig
2017-01-01
Interactive ray tracing applications running on commodity hardware can suffer from objectionable temporal artifacts due to a low sample count. We introduce stable ray tracing, a technique that improves temporal stability without the over-blurring and ghosting artifacts typical of temporal post-pr...
Kearney, M. Kate
2013-01-01
The concordance genus of a knot is the least genus of any knot in its concordance class. Although difficult to compute, it is a useful invariant that highlights the distinction between the three-genus and four-genus. In this paper we define and discuss the stable concordance genus of a knot, which describes the behavior of the concordance genus under connected sum.
Stable radiographic scanning agents
International Nuclear Information System (INIS)
1976-01-01
Stable compositions which are useful in the preparation of technetium-99m-based scintigraphic agents are discussed. They comprise ascorbic acid, or a pharmaceutically acceptable salt or ester thereof, in combination with a pertechnetate reducing agent, or dissolved in oxidized pertechnetate-99m (⁹⁹ᵐTcO₄⁻) solution
Some stable hydromagnetic equilibria
Energy Technology Data Exchange (ETDEWEB)
Johnson, J L; Oberman, C R; Kulsrud, R M; Frieman, E A [Project Matterhorn, Princeton University, Princeton, NJ (United States)
1958-07-01
We have been able to find and investigate the properties of equilibria which are hydromagnetically stable. These equilibria can be obtained, for example, by wrapping conductors helically around the stellarator tube. Systems with ℓ = 3 or 4 are indicated to be optimum for stability purposes. In some cases an admixture of ℓ = 2 fields can be advantageous for achieving equilibrium. (author)
Maximum physical capacity testing in cancer patients undergoing chemotherapy
DEFF Research Database (Denmark)
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determin...... early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy....... in performing maximum physical capacity tests as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing...
The worst case complexity of maximum parsimony.
Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal
2014-11-01
One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.
Designing a stable feedback control system for blind image deconvolution.
Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan
2018-05-01
Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to an undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to keep the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors, and thus the kernel estimate used for image restoration becomes more precise. Experimental results show that our system is effective for image propagation, and performs favorably against state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.
Maximum entropy and Bayesian methods
International Nuclear Information System (INIS)
Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.
1992-01-01
Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
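The maximization over genetic parameter values described above can be sketched in a few lines for the simplest two-point linkage setting. A minimal illustration, assuming fully informative phase-known meioses with r recombinants out of n (the counts and the parameter grid below are invented for illustration, not taken from the paper):

```python
import math

def lod(theta, r, n):
    """Two-point LOD score: log10 likelihood ratio of linkage at
    recombination fraction theta versus free recombination (theta = 0.5)."""
    return (r * math.log10(theta) + (n - r) * math.log10(1 - theta)
            - n * math.log10(0.5))

def max_lod(r, n, grid=None):
    """Maximize the LOD score over a grid of recombination fractions,
    mirroring the 'maximum over genetic parameter values' statistic."""
    grid = grid or [i / 100 for i in range(1, 50)]  # theta in (0, 0.5)
    return max((lod(t, r, n), t) for t in grid)

# 2 recombinants in 20 meioses: the grid maximum sits at theta = r/n = 0.1
best, theta_hat = max_lod(r=2, n=20)
```

As the abstract notes, reporting `best` rather than the LOD at a single prespecified theta requires a higher critical value, because the maximum over a grid is stochastically larger under the null.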
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
International Nuclear Information System (INIS)
Tibari, Elghali; Taous, Fouad; Marah, Hamid
2014-01-01
This report presents results related to stable isotope analyses carried out at the CNESTEN DASTE in Rabat (Morocco), on behalf of Senegal. These analyses cover 127 samples. Oxygen-18 and deuterium analyses of water were performed by infrared laser spectroscopy using an LGR DLT-100 with autosampler. The results are expressed in δ values (‰) relative to V-SMOW, to ±0.3‰ for oxygen-18 and ±1‰ for deuterium.
Forensic Stable Isotope Biogeochemistry
Cerling, Thure E.; Barnette, Janet E.; Bowen, Gabriel J.; Chesson, Lesley A.; Ehleringer, James R.; Remien, Christopher H.; Shea, Patrick; Tipple, Brett J.; West, Jason B.
2016-06-01
Stable isotopes are being used for forensic science studies, with applications to both natural and manufactured products. In this review we discuss how scientific evidence can be used in the legal context and where the scientific progress of hypothesis revisions can be in tension with the legal expectations of widely used methods for measurements. Although this review is written in the context of US law, many of the considerations of scientific reproducibility and acceptance of relevant scientific data span other legal systems that might apply different legal principles and therefore reach different conclusions. Stable isotopes are used in legal situations for comparing samples for authenticity or evidentiary considerations, in understanding trade patterns of illegal materials, and in understanding the origins of unknown decedents. Isotope evidence is particularly useful when considered in the broad framework of physiochemical processes and in recognizing regional to global patterns found in many materials, including foods and food products, drugs, and humans. Stable isotopes considered in the larger spatial context add an important dimension to forensic science.
Maximum entropy principle for transportation
International Nuclear Information System (INIS)
Bilich, F.; Da Silva, R.
2008-01-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
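The standard constrained counterpart of the entropy-maximizing trip-distribution model above is solvable by iterative balancing. A minimal sketch of that classical doubly constrained formulation — not the authors' dependence-coefficient variant, and with origin/destination totals and travel costs made up for illustration:

```python
import math

def entropy_trip_distribution(origins, dests, cost, beta=0.1, iters=50):
    """Doubly constrained entropy-maximizing trip distribution:
    T_ij = A_i * O_i * B_j * D_j * exp(-beta * c_ij), with the balancing
    factors A_i, B_j found by iterative proportional fitting."""
    n, m = len(origins), len(dests)
    A = [1.0] * n
    B = [1.0] * m
    f = [[math.exp(-beta * cost[i][j]) for j in range(m)] for i in range(n)]
    for _ in range(iters):
        for i in range(n):  # enforce row (origin) totals
            A[i] = 1.0 / sum(B[j] * dests[j] * f[i][j] for j in range(m))
        for j in range(m):  # enforce column (destination) totals
            B[j] = 1.0 / sum(A[i] * origins[i] * f[i][j] for i in range(n))
    return [[A[i] * origins[i] * B[j] * dests[j] * f[i][j] for j in range(m)]
            for i in range(n)]

T = entropy_trip_distribution([100, 200], [150, 150],
                              [[2.0, 5.0], [4.0, 3.0]])
```

The balancing factors here play the role that the paper's dependence coefficients absorb: they encode the constraint information, so the converged matrix reproduces the origin and destination totals exactly.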
Stable cosmology in chameleon bigravity
De Felice, Antonio; Mukohyama, Shinji; Oliosi, Michele; Watanabe, Yota
2018-02-01
The recently proposed chameleonic extension of bigravity theory, by including a scalar field dependence in the graviton potential, avoids several fine-tunings found to be necessary in usual massive bigravity. In particular it ensures that the Higuchi bound is satisfied at all scales, that no Vainshtein mechanism is needed to satisfy Solar System experiments, and that the strong coupling scale is always above the scale of cosmological interest all the way up to the early Universe. This paper extends the previous work by presenting a stable example of cosmology in the chameleon bigravity model. We find a set of initial conditions and parameters such that the derived stability conditions on general flat Friedmann background are satisfied at all times. The evolution goes through radiation-dominated, matter-dominated, and de Sitter eras. We argue that the parameter space allowing for such a stable evolution may be large enough to encompass an observationally viable evolution. We also argue that our model satisfies all known constraints due to gravitational wave observations so far and thus can be considered as a unique testing ground of gravitational wave phenomenologies in bimetric theories of gravity.
Super-stable Poissonian structures
International Nuclear Information System (INIS)
Eliazar, Iddo
2012-01-01
In this paper we characterize classes of Poisson processes whose statistical structures are super-stable. We consider a flow generated by a one-dimensional ordinary differential equation, and an ensemble of particles ‘surfing’ the flow. The particles start from random initial positions, and are propagated along the flow by stochastic ‘wave processes’ with general statistics and general cross correlations. Setting the initial positions to be Poisson processes, we characterize the classes of Poisson processes that render the particles’ positions—at all times, and invariantly with respect to the wave processes—statistically identical to their initial positions. These Poisson processes are termed ‘super-stable’ and facilitate the generalization of the notion of stationary distributions far beyond the realm of Markov dynamics. (paper)
Super-stable Poissonian structures
Eliazar, Iddo
2012-10-01
In this paper we characterize classes of Poisson processes whose statistical structures are super-stable. We consider a flow generated by a one-dimensional ordinary differential equation, and an ensemble of particles ‘surfing’ the flow. The particles start from random initial positions, and are propagated along the flow by stochastic ‘wave processes’ with general statistics and general cross correlations. Setting the initial positions to be Poisson processes, we characterize the classes of Poisson processes that render the particles’ positions—at all times, and invariantly with respect to the wave processes—statistically identical to their initial positions. These Poisson processes are termed ‘super-stable’ and facilitate the generalization of the notion of stationary distributions far beyond the realm of Markov dynamics.
Maximum Entropy: Clearing up Mysteries
Directory of Open Access Journals (Sweden)
Marian Grendár
2001-04-01
Full Text Available Abstract: There are several mystifications and a couple of mysteries pertinent to MaxEnt. The mystifications, pitfalls and traps are set up mainly by an unfortunate formulation of Jaynes' die problem, the cause célèbre of MaxEnt. After discussing the mystifications a new formulation of the problem is proposed. Then we turn to the mysteries. An answer to the recurring question 'Just what are we accomplishing when we maximize entropy?' [8], based on MaxProb rationale of MaxEnt [6], is recalled. A brief view on the other mystery: 'What is the relation between MaxEnt and the Bayesian method?' [9], in light of the MaxProb rationale of MaxEnt suggests that there is not and cannot be a conflict between MaxEnt and Bayes Theorem.
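Jaynes' die problem mentioned above has a concrete computational answer: the maximum-entropy distribution over faces 1-6 subject to a mean constraint is exponential in the face value, with the Lagrange multiplier fixed by that constraint. A sketch (the target mean 4.5 is the usual textbook value, not a number from this article):

```python
import math

def maxent_die(target_mean, faces=6):
    """Maximum-entropy distribution p_i ∝ exp(lam * i) over faces 1..6
    with a prescribed mean; lam is found by bisection, exploiting that
    the constrained mean is monotone increasing in lam."""
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, faces + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, faces + 1), w)) / z
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in range(1, faces + 1)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)  # probabilities increase geometrically toward face 6
```

For a mean above 3.5 the multiplier is positive, so the MaxEnt die loads probability monotonically onto the higher faces rather than, say, splitting mass between faces 3 and 6.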
Last Glacial Maximum Salinity Reconstruction
Homola, K.; Spivack, A. J.
2016-12-01
It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10⁻⁶ g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10⁻⁶ g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl⁻ and SO₄²⁻) and cations (Na⁺, Mg²⁺, Ca²⁺, and K⁺) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO₄²⁻/Cl⁻ and Mg²⁺/Na⁺, and 0.4% for Ca²⁺/Na⁺ and K⁺/Na⁺. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl⁻, and used to calculate HCO₃⁻ and CO₃²⁻. Apparent partial molar densities in seawater were
Harman, Nate
2016-01-01
We consider the following counting problem related to the card game SET: How many $k$-element SET-free sets are there in an $n$-dimensional SET deck? Through a series of algebraic reformulations and reinterpretations, we show the answer to this question satisfies two polynomiality conditions.
Two-dimensional maximum entropy image restoration
International Nuclear Information System (INIS)
Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.
1977-07-01
An optical check problem was constructed to test p log p maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures
Alsing, Justin; Silva, Hector O.; Berti, Emanuele
2018-04-01
We infer the mass distribution of neutron stars in binary systems using a flexible Gaussian mixture model and use Bayesian model selection to explore evidence for multi-modality and a sharp cut-off in the mass distribution. We find overwhelming evidence for a bimodal distribution, in agreement with previous literature, and report for the first time positive evidence for a sharp cut-off at a maximum neutron star mass. We measure the maximum mass to be ≈2.0 M⊙. If the sharp cut-off is interpreted as the maximum stable neutron star mass allowed by the equation of state of dense matter, our measurement puts constraints on the equation of state. For a set of realistic equations of state that support >2 M⊙ neutron stars, our inference of m_max is able to distinguish between models at odds ratios of up to 12:1, whilst under a flexible piecewise polytropic equation of state model our maximum mass measurement improves constraints on the pressure at 3-7 × the nuclear saturation density by ˜30-50% compared to simply requiring m_max > 2 M⊙. We obtain a lower bound on the maximum sound speed attained inside the neutron star of c_s^max > 0.63c (99.8%), ruling out c_s^max < c/√3 at high significance. Our constraints on the maximum neutron star mass strengthen the case for neutron star-neutron star mergers as the primary source of short gamma-ray bursts.
Rare stable isotopes in meteorites
International Nuclear Information System (INIS)
Wilson, G.C.
1981-01-01
Secondary Ion Mass Spectrometry (SIMS) using accelerators has been applied with success to cosmic ray exposure ages and terrestrial residence times of meteorites by measuring cosmogenic nuclides of Be, Cl, and I. It is proposed to complement this work with experiments on rare stable isotopes, in the hope of setting constraints on the processes of solar nebula/meteoritic formation. The relevant species can be classified as: a) daughter products of extinct nuclides (half-life ≤ 2 × 10⁸ y) - chronology of the early solar system; b) products of high temperature astrophysical processes - different components incorporated into the solar nebula; and c) products of relatively low temperature processes, stellar winds and cosmic ray reactions - early solar system radiation history. The use of micron-scale primary ion beams will allow detailed sampling of phases within meteorites. Strategies of charge-state selection, molecular disintegration and detection should bring a new set of targets within analytical range. The developing accelerator field is compared to existing (keV energy) ion microprobes
Gravitational Waves and the Maximum Spin Frequency of Neutron Stars
Patruno, A.; Haskell, B.; D'Angelo, C.
2012-01-01
In this paper, we re-examine the idea that gravitational waves are required as a braking mechanism to explain the observed maximum spin frequency of neutron stars. We show that for millisecond X-ray pulsars, the existence of spin equilibrium as set by the disk/magnetosphere interaction is sufficient
Maximum Permissible Concentrations and Negligible Concentrations for pesticides
Crommentuijn T; Kalf DF; Polder MD; Posthumus R; Plassche EJ van de; CSR
1997-01-01
Maximum Permissible Concentrations (MPCs) and Negligible Concentrations (NCs) derived for a series of pesticides are presented in this report. These MPCs and NCs are used by the Ministry of Housing, Spatial Planning and the Environment (VROM) to set Environmental Quality Objectives. For some of the
Maximum power point tracker based on fuzzy logic
International Nuclear Information System (INIS)
Daoud, A.; Midoun, A.
2006-01-01
Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is important to obtain the maximum power from the limited solar panels. With the changing of the sun illumination, due to variation of the angle of incidence of sun radiation and of the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to the degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires linguistic control rules for the maximum power point; the mathematical model is not required, and therefore this control method is easy to implement in a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory, where the hardware consists of the Microchip microcontroller unit control card and
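The perturbation and observation (hill climbing) baseline that the abstract contrasts with the fuzzy controller can be sketched compactly. The panel model below is a toy concave power curve invented for illustration, not the paper's hardware:

```python
def perturb_and_observe(measure_power, v0=12.0, dv=0.1, steps=200):
    """Hill-climbing MPPT: perturb the operating voltage and keep moving
    in the direction that increased the measured panel power."""
    v, p_prev, direction = v0, measure_power(v0), 1.0
    for _ in range(steps):
        v += direction * dv
        p = measure_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy panel model (assumed): concave P(V) peaking at 17 V.
peak = perturb_and_observe(lambda v: -(v - 17.0) ** 2 + 100.0)
```

The sketch also exposes the weakness the abstract cites: the operating point ratchets toward the peak one fixed step dv per sample, so under rapidly changing insolation the tracker lags, which is what motivates replacing the fixed perturbation with fuzzy rules.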
Directory of Open Access Journals (Sweden)
Pantelić Svetlana
2014-01-01
Full Text Available The Swedish Royal Academy awarded the 2012 Nobel Prize in Economics to Lloyd Shapley and Alvin Roth, for the theory of stable allocations and the practice of market design. These two American researchers worked independently from each other, combining basic theory and empirical investigations. Through their experiments and practical design they generated a flourishing field of research and improved the performance of many markets. Born in 1923 in Cambridge, Massachusetts, Shapley defended his doctoral thesis at Princeton University in 1953. For many years he worked at RAND, and for more than thirty years he was a professor at UCLA University. He published numerous scientific papers, either by himself or in cooperation with other economists.
Holdener, Fred R.; Boyd, Robert D.
2000-01-01
The present invention is a bi-stable optical actuator device that is depowered in both stable positions. A bearing is used to transfer motion and smoothly transition from one state to another. The optical actuator device may be maintained in a stable position either by gravity or a restraining device.
Maximum Power from a Solar Panel
Directory of Open Access Journals (Sweden)
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
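The differentiation approach described above — find the voltage where dP/dV = 0 for P(V) = V·I(V) — can be illustrated with a single-diode cell model. All parameter values below are invented for illustration:

```python
import math

def panel_current(v, i_l=5.0, i_0=1e-9, v_t=1.0):
    """Single-diode solar cell model I(V) = I_L - I_0*(exp(V/V_t) - 1)
    (illustrative parameter values, not measured data)."""
    return i_l - i_0 * (math.exp(v / v_t) - 1.0)

def mpp_voltage(lo=0.0, hi=25.0, tol=1e-8):
    """Voltage of maximum power: bisect on the sign of dP/dV, the
    calculus condition dP/dV = 0 from the abstract, with the derivative
    taken by central differences."""
    dp = lambda v, h=1e-6: ((v + h) * panel_current(v + h)
                            - (v - h) * panel_current(v - h)) / (2 * h)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if dp(mid) > 0:   # still on the rising side of P(V)
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

v_mp = mpp_voltage()                      # voltage of maximum power
p_mp = v_mp * panel_current(v_mp)         # maximum power
```

Repeating this for panel parameters measured at each time of day reproduces the article's plots of voltage of maximum power, current of maximum power, and maximum power versus time.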
Mammographic image restoration using maximum entropy deconvolution
International Nuclear Information System (INIS)
Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R
2004-01-01
An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization
Maximum Margin Clustering of Hyperspectral Data
Niazmardi, S.; Safari, A.; Homayouni, S.
2013-09-01
In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for classification of hyperspectral data. However, the results of these algorithms depend mainly on the quality and quantity of available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One of the recently proposed algorithms is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms handle only two-class problems, and so cannot be used for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the proposed algorithm achieves acceptable results for hyperspectral data clustering.
Paving the road to maximum productivity.
Holland, C
1998-01-01
"Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.
Ancestral Sequence Reconstruction with Maximum Parsimony.
Herbst, Lina; Fischer, Mareike
2017-12-01
One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
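Ancestral state inference under MP on a fully bifurcating tree, the setting studied above, is classically carried out with the Fitch algorithm. A minimal sketch (the tree and leaf states are a made-up example, not from the paper):

```python
def fitch(tree, leaf_states):
    """Fitch small-parsimony pass: bottom-up state sets on a rooted
    binary tree. tree maps each internal node to its (left, right)
    children; leaf_states maps each leaf to its observed character state.
    Returns (state set per node, minimum number of state changes)."""
    sets, changes = {}, 0

    def post(node):
        nonlocal changes
        if node in leaf_states:
            sets[node] = {leaf_states[node]}
            return
        left, right = tree[node]
        post(left)
        post(right)
        inter = sets[left] & sets[right]
        if inter:
            sets[node] = inter
        else:
            sets[node] = sets[left] | sets[right]
            changes += 1        # a substitution is forced at this node

    root = next(iter(tree))     # assumes the first key is the root
    post(root)
    return sets, changes

# Tree ((A,B),(C,D)) with states a, a, a, b: three of four leaves carry
# 'a', and MP unambiguously assigns 'a' to the root with one change.
tree = {"R": ("X", "Y"), "X": ("A", "B"), "Y": ("C", "D")}
sets, k = fitch(tree, {"A": "a", "B": "a", "C": "a", "D": "b"})
```

The Charleston-Steel question above asks, in effect, how many leaves must carry state a (as three of four do here) before this root assignment is guaranteed to be unambiguous.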
Automatic sets and Delone sets
International Nuclear Information System (INIS)
Barbe, A; Haeseler, F von
2004-01-01
Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples.
Risk following hospitalization in stable chronic systolic heart failure
DEFF Research Database (Denmark)
Abrahamsson, Putte; Swedberg, Karl; Borer, Jeffrey S
2013-01-01
We explored the impact of being hospitalized due to worsening heart failure (WHF) or a myocardial infarction (MI) on subsequent mortality in a large contemporary data set of patients with stable chronic systolic heart failure (HF).
Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation
Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.
2015-11-01
We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
Strongly stable real infinitesimally symplectic mappings
Cushman, R.; Kelley, A.
We prove that a map A ∈ sp(σ, R), the set of infinitesimally symplectic maps, is strongly stable if and only if its centralizer C(A) in sp(σ, R) contains only semisimple elements. Using the theorem that every B in sp(σ, R) close to A is conjugate by a real symplectic map to an element of C(A), we give a new
Vázquez-Guerrero, Jairo; Moras, Gerard; Baeza, Jennifer; Rodríguez-Jiménez, Sergio
2016-01-01
The purpose of the study was to compare the force outputs achieved during a squat exercise using a rotational inertia device in stable versus unstable conditions with different loads and in concentric and eccentric phases. Thirteen male athletes (mean ± SD: age 23.7 ± 3.0 years, height 1.80 ± 0.08 m, body mass 77.4 ± 7.9 kg) were assessed while squatting, performing one set of three repetitions with four different loads under stable and unstable conditions at maximum concentric effort. Overall, there were no significant differences between the stable and unstable conditions at each of the loads for any of the dependent variables. Mean force showed significant differences between some of the loads in stable and unstable conditions (P < 0.05). The rotational inertia device allowed the generation of similar force outputs under stable and unstable conditions at each of the four loads. The study also provides empirical evidence of the different force outputs achieved by adjusting load conditions on the rotational inertia device when performing squats, especially in the case of peak force. Concentric force outputs were significantly higher than eccentric outputs, except for peak force under both conditions. These findings support the use of the rotational inertia device to train the squatting exercise under unstable conditions for strength and conditioning trainers. The device could also be included in injury prevention programs for muscle lesions and ankle and knee joint injuries.
Stoll, Robert R
1979-01-01
Set Theory and Logic is the result of a course of lectures for advanced undergraduates, developed at Oberlin College for the purpose of introducing students to the conceptual foundations of mathematics. Mathematics, specifically the real number system, is approached as a unity whose operations can be logically ordered through axioms. One of the most complex and essential of modern mathematical innovations, the theory of sets (crucial to quantum mechanics and other sciences), is introduced in a most careful manner, aiming for the maximum in clarity and stimulation for further study in
Maximum permissible voltage of YBCO coated conductors
Energy Technology Data Exchange (ETDEWEB)
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.
Maximizing the transferred power to electric arc furnace for having maximum production
International Nuclear Information System (INIS)
Samet, Haidar; Ghanbari, Teymoor; Ghaisari, Jafar
2014-01-01
In order to increase the production of an EAF (electric arc furnace) by reducing melting time, one can increase the power transferred to the EAF. In other words, a certain amount of energy can be transferred to the EAF in less time. The power transferred to the EAF is reduced when series reactors are utilized in order to have a stable arc with the desired characteristics. To compensate for the reduced transferred power, the secondary voltage of the EAF transformer should be increased by tap changing of the transformer. On the other hand, after any tap change of the EAF transformer, the improved arc stability is degraded. Therefore, the series reactor and EAF transformer taps should be determined simultaneously to achieve an arc with the desired characteristics. In this research, three approaches are proposed to calculate the EAF system parameters, by which the optimal set-points of the different series reactor and EAF transformer taps are determined. The electric characteristics of the EAF for all transformer and series reactor taps, with and without SVC (static VAr compensator), are plotted, and based on these graphs the optimal set-points are tabulated. Finally, an economic evaluation of the methods is also presented. - Highlights: • The main goal is to transfer the maximum power to the electric arc furnace. • Optimal transformer and series reactor taps are determined. • Arc stability and transferred power to the EAF determine the optimal performance. • An economic assessment is done and the number of additional meltings is calculated
Maximum-confidence discrimination among symmetric qudit states
International Nuclear Information System (INIS)
Jimenez, O.; Solis-Prosser, M. A.; Delgado, A.; Neves, L.
2011-01-01
We study the maximum-confidence (MC) measurement strategy for discriminating among nonorthogonal symmetric qudit states. Restricting to linearly dependent and equally likely pure states, we find the optimal positive operator valued measure (POVM) that maximizes our confidence in identifying each state in the set and minimizes the probability of obtaining inconclusive results. The physical realization of this POVM is completely determined and it is shown that after an inconclusive outcome, the input states may be mapped into a new set of equiprobable symmetric states, restricted, however, to a subspace of the original qudit Hilbert space. By applying the MC measurement again onto this new set, we can still gain some information about the input states, although with less confidence than before. This leads us to introduce the concept of sequential maximum-confidence (SMC) measurements, where the optimized MC strategy is iterated in as many stages as allowed by the input set, until no further information can be extracted from an inconclusive result. Within each stage of this measurement our confidence in identifying the input states is the highest possible, although it decreases from one stage to the next. In addition, the more stages we accomplish within the maximum allowed, the higher will be the probability of correct identification. We will discuss an explicit example of the optimal SMC measurement applied in the discrimination among four symmetric qutrit states and propose an optical network to implement it.
Oxidant and solvent stable alkaline protease from Aspergillus flavus ...
African Journals Online (AJOL)
The increase in agricultural practices has necessitated the judicious conversion of agricultural wastes into value-added products. In this study, an extracellular, organic-solvent- and oxidant-stable serine protease was produced by Aspergillus flavus MTCC 9952 under solid state fermentation. Maximum protease yield was obtained ...
One-dimensional stable distributions
Zolotarev, V M
1986-01-01
This is the first book specifically devoted to a systematic exposition of the essential facts known about the properties of stable distributions. In addition to its main focus on the analytic properties of stable laws, the book also includes examples of the occurrence of stable distributions in applied problems and a chapter on the problem of statistical estimation of the parameters determining stable laws. A valuable feature of the book is the author's use of several formally different ways of expressing characteristic functions corresponding to these laws.
Revealing the Maximum Strength in Nanotwinned Copper
DEFF Research Database (Denmark)
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Modelling maximum canopy conductance and transpiration in ...
African Journals Online (AJOL)
There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...
Stable-isotope paleoclimatology
International Nuclear Information System (INIS)
Deuser, W.G.
1978-01-01
Seasonal variations of temperature and salinity in the surface waters of large parts of the oceans are well established. Available data on seasonal distributions of planktonic foraminifera show that the abundances of different species groups peak at different times of the year, with an apparent succession of abundance peaks through most of the year. This evidence suggests that a measure of seasonal contrast is recorded in the isotope ratios of oxygen, and perhaps carbon, in the tests of different foraminiferal species. The evaluation of this potential paleoclimatologic tool awaits planned experiments with recent foraminifera in well-known settings, but a variety of available data is consistent with the idea that interspecies differences in 18O content contain a seasonal component. (auth.)
Einstein-Dirac theory in spin maximum I
International Nuclear Information System (INIS)
Crumeyrolle, A.
1975-01-01
A unitary Einstein-Dirac theory, first in spin maximum 1, is constructed. An original feature of this article is that it is written without any tetrad techniques; only basic notions and existence conditions for spinor structures on pseudo-Riemannian fibre bundles are used. A coupling of the gravitational and electromagnetic fields is pointed out, in the geometric setting of the tangent bundle over space-time. Generalized Maxwell equations for inductive media in the presence of a gravitational field are obtained. The enlarged Einstein-Schroedinger theory gives a particular case of this E.D. theory; E.S. theory is a truncated E.D. theory in spin maximum 1. A close relation between the torsion-vector and Schroedinger's potential exists, and the nullity of the torsion-vector has a spinor meaning. Finally, the Petiau-Duffin-Kemmer theory is incorporated in this geometric setting [fr
Algorithms of maximum likelihood data clustering with applications
Giada, Lorenzo; Marsili, Matteo
2002-12-01
We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures, whereas the outcome of standard algorithms has a much wider variability.
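As a rough illustration of this approach (not the authors' exact implementation), the sketch below scores a partition with the Giada-Marsili likelihood expression as commonly quoted — treat the precise formula as an assumption to be checked against the paper — and greedily merges clusters while the likelihood improves:

```python
import numpy as np

def log_likelihood(C, clusters):
    """Cluster-structure log-likelihood (Giada-Marsili form, as recalled).
    C is the Pearson correlation matrix; clusters is a list of index lists."""
    L = 0.0
    for s in clusters:
        n = len(s)
        c = C[np.ix_(s, s)].sum()       # internal correlation of cluster s
        if n > 1 and c > n:             # only correlated clusters contribute
            L += 0.5 * (np.log(n / c)
                        + (n - 1) * np.log((n * n - n) / (n * n - c)))
    return L

def greedy_cluster(C):
    """Start from singletons; merge the pair that most increases the
    likelihood, stopping when no merge improves it."""
    clusters = [[i] for i in range(C.shape[0])]
    while len(clusters) > 1:
        base = log_likelihood(C, clusters)
        best, pair = 0.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                trial = [c for k, c in enumerate(clusters) if k not in (i, j)]
                trial.append(clusters[i] + clusters[j])
                gain = log_likelihood(C, trial) - base
                if gain > best:
                    best, pair = gain, (i, j)
        if pair is None:
            break                       # no merge increases the likelihood
        i, j = pair
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters

# Two blocks of strongly correlated objects, uncorrelated across blocks.
C = np.array([[1.0, 0.9, 0.0, 0.0],
              [0.9, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.9],
              [0.0, 0.0, 0.9, 1.0]])
print(greedy_cluster(C))                # -> [[0, 1], [2, 3]]
```

Note the parameter-free character claimed in the abstract: nothing here fixes the number of clusters in advance; the merge loop stops on its own.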
Maximum credible accident analysis for TR-2 reactor conceptual design
International Nuclear Information System (INIS)
Manopulo, E.
1981-01-01
A new 5 MW reactor, TR-2, designed in cooperation with CEN/Grenoble, is under construction in the open pool of the 1 MW TR-1 reactor set up by AMF Atomics at the Cekmece Nuclear Research and Training Center. In this report, the fission product inventory and the doses released after the maximum credible accident have been studied. The diffusion of the gaseous fission products to the environment and the potential radiation risks to the population have been evaluated
PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation
Energy Technology Data Exchange (ETDEWEB)
Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.
2007-06-23
In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than baseline and are the highest F-score for the fine-grained English All-Words subtask.
The calculation of maximum permissible exposure levels for laser radiation
International Nuclear Information System (INIS)
Tozer, B.A.
1979-01-01
The maximum permissible exposure data of the revised standard BS 4803 are presented as a set of decision charts which ensure that the user automatically takes into account such details as pulse length and pulse pattern, limiting angular subtense, combinations of multiple wavelengths and/or multiple pulse lengths, etc. The two decision charts given are for the calculation of radiation hazards to skin and eye respectively. (author)
Stable isotope research pool inventory
International Nuclear Information System (INIS)
1980-12-01
This report contains a listing of electromagnetically separated stable isotopes which are available for distribution within the United States for non-destructive research use from the Oak Ridge National Laboratory on a loan basis. This inventory includes all samples of stable isotopes in the Materials Research Collection and does not designate whether a sample is out on loan or in reprocessing
Russian ElectroKhimPribor integrated plant - producer and supplier of enriched stable isotopes
International Nuclear Information System (INIS)
Tatarinov, A.N.; Polyakov, L.A.
1997-01-01
The Russian ElectroKhimPribor Integrated Plant, like ORNL, is a leading producer which manufactures and supplies to the world market such specific products as stable isotopes. More than 200 isotopes of 44 elements can be obtained with its electromagnetic separator. The changes underway in Russia over the last few years have affected the production and distribution of stable isotopes. There arose a necessity for a new approach to handling work in this field so as to create favourable conditions for both producers and customers. As a result, positive changes in calutron operation at ElectroKhimPribor have been achieved: a quality management system covering all stages of production has been set up; a large and attractive stock of isotopes has been created; prospective scientific isotope-based developments are taken into account when planning separation campaigns; execution of the contracts is guaranteed; and the business philosophy has been changed to meet the maximum of customer needs. For more than forty years ElectroKhimPribor has had no claims from customers as to the quality of products or the implementation of contracts. Supplying enriched stable isotopes to virtually all the world's leading customers, ElectroKhimPribor has cooperated successfully with the Canadian company Trace Science since 1996
MXLKID: a maximum likelihood parameter identifier
International Nuclear Information System (INIS)
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
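MXLKID itself is an LRLTRAN code, but the core idea — maximize a likelihood function over the system parameters given noisy measurements — can be sketched in a few lines of Python. The one-parameter system, noise level and search routine below are illustrative assumptions, not MXLKID's algorithm; under Gaussian measurement noise, maximizing the likelihood reduces to minimizing the sum of squared residuals:

```python
# Toy parameter identification: recover the decay parameter a in
# x[k+1] = a * x[k] from noisy measurements y[k] = x[k] + noise.
import random

random.seed(0)
a_true, x = 0.8, 1.0
ys = []
for _ in range(50):
    ys.append(x + random.gauss(0.0, 0.01))   # noisy measurement of state
    x *= a_true                              # true system dynamics

def neg_log_like(a):
    """Negative log-likelihood up to a constant: sum of squared residuals
    between measurements and the trajectory predicted by parameter a."""
    x, s = 1.0, 0.0
    for y in ys:
        s += (y - x) ** 2
        x *= a
    return s

def golden_min(f, lo, hi, iters=80):
    """Golden-section search for the minimizer of a unimodal f on [lo, hi]."""
    g = (5 ** 0.5 - 1) / 2
    c, d = hi - g * (hi - lo), lo + g * (hi - lo)
    for _ in range(iters):
        if f(c) < f(d):
            hi, d = d, c
            c = hi - g * (hi - lo)
        else:
            lo, c = c, d
            d = lo + g * (hi - lo)
    return 0.5 * (lo + hi)

a_hat = golden_min(neg_log_like, 0.01, 0.99)  # estimate lands near a_true
```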
Maximum neutron flux in thermal reactors
International Nuclear Information System (INIS)
Strugar, P.V.
1968-12-01
The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using Pontryagin's maximum principle. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate for the application of the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples
Maximum allowable load on wheeled mobile manipulators
International Nuclear Information System (INIS)
Habibnejad Korayem, M.; Ghariblu, H.
2003-01-01
This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, the application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and directly depends on the additional constraint functions applied to resolve the motion redundancy
Maximum phytoplankton concentrations in the sea
DEFF Research Database (Denmark)
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
The Nature of Stable Insomnia Phenotypes
Pillai, Vivek; Roth, Thomas; Drake, Christopher L.
2015-01-01
Study Objectives: We examined the 1-y stability of four insomnia symptom profiles: sleep onset insomnia; sleep maintenance insomnia; combined onset and maintenance insomnia; and neither criterion (i.e., insomnia cases that do not meet quantitative thresholds for onset or maintenance problems). Insomnia cases that exhibited the same symptom profile over a 1-y period were considered to be phenotypes, and were compared in terms of clinical and demographic characteristics. Design: Longitudinal. Setting: Urban, community-based. Participants: Nine hundred fifty-four adults with Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition based current insomnia (46.6 ± 12.6 y; 69.4% female). Interventions: None. Measurements and results: At baseline, participants were divided into four symptom profile groups based on quantitative criteria. Follow-up assessment 1 y later revealed that approximately 60% of participants retained the same symptom profile, and were hence judged to be phenotypes. Stability varied significantly by phenotype, such that sleep onset insomnia (SOI) was the least stable (42%), whereas combined insomnia (CI) was the most stable (69%). Baseline symptom groups (cross-sectionally defined) differed significantly across various clinical indices, including daytime impairment, depression, and anxiety. Importantly, however, a comparison of stable phenotypes (longitudinally defined) did not reveal any differences in impairment or comorbid psychopathology. Another interesting finding was that whereas all other insomnia phenotypes showed evidence of an elevated wake drive both at night and during the day, the “neither criterion” phenotype did not; this latter phenotype exhibited significantly higher daytime sleepiness despite subthreshold onset and maintenance difficulties. Conclusions: By adopting a stringent, stability-based definition, this study offers timely and important data on the longitudinal trajectory of specific insomnia phenotypes. With
Stable configurations in social networks
Bronski, Jared C.; DeVille, Lee; Ferguson, Timothy; Livesay, Michael
2018-06-01
We present and analyze a model of opinion formation on an arbitrary network whose dynamics comes from a global energy function. We study the global and local minimizers of this energy, which we call stable opinion configurations, and describe the global minimizers under certain assumptions on the friendship graph. We show a surprising result that the number of stable configurations is not necessarily monotone in the strength of connection in the social network, i.e. the model sometimes supports more stable configurations when the interpersonal connections are made stronger.
Development of Stable Isotope Technology
International Nuclear Information System (INIS)
Jeong, Do Young; Kim, Cheol Jung; Han, Jae Min
2009-03-01
KAERI has obtained an advanced technology with singular originality for laser stable isotope separation. The objectives of this project are to acquire production technology for the Tl-203 stable isotope used in medical applications and to establish the foundation of a pilot system, while we are aiming at laser isotope separation technology that is resistant to nuclear proliferation. We will also contribute to ensuring nuclear transparency in the world society by taking part in a practical group of the NSG and by collaborating with various international groups related to stable isotope separation technology
Directory of Open Access Journals (Sweden)
Arbutina Bojan
2011-01-01
AM CVn-type stars and ultra-compact X-ray binaries are extremely interesting semi-detached close binary systems in which the Roche-lobe-filling component is a white dwarf transferring mass to another white dwarf, a neutron star or a black hole. Earlier theoretical considerations show that there is a maximum mass ratio of AM CVn-type binary systems (q_max ≈ 2/3) below which the mass transfer is stable. In this paper we derive a slightly different value for q_max and, more interestingly, by applying the same procedure, we find the maximum expected white dwarf mass in ultra-compact X-ray binaries.
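The classical q_max ≈ 2/3 quoted above follows from comparing how the donor and its Roche lobe respond to mass loss. The sketch below uses textbook assumptions — Paczynski's Roche-lobe approximation, fully conservative mass transfer, and a white-dwarf mass-radius exponent of −1/3 — and is not the paper's refined derivation:

```python
# Hedged sketch of the classical stability argument behind q_max ~ 2/3.
from math import log

def zeta_roche(q, eps=1e-6):
    """d ln R_L / d ln M_donor for conservative transfer, q = M_donor/M_accretor.

    R_L = a * 0.462 * (q/(1+q))**(1/3) (Paczynski); the orbit responds as
    d ln a / d ln M_donor = 2q - 2 and d ln q / d ln M_donor = 1 + q.
    """
    def ln_rl_frac(q):                  # ln(r_L / a)
        return log(0.462) + log(q / (1.0 + q)) / 3.0
    dlnrl_dlnq = q * (ln_rl_frac(q + eps) - ln_rl_frac(q - eps)) / (2 * eps)
    return (2 * q - 2) + (1 + q) * dlnrl_dlnq

def critical_q(zeta_donor=-1/3):
    """Bisect for the q where the Roche lobe responds as fast as the donor;
    below this q the donor shrinks inside its lobe and transfer is stable."""
    lo, hi = 0.01, 1.5
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if zeta_roche(mid) < zeta_donor:
            lo = mid                    # still stable at this mass ratio
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(critical_q(), 3))           # -> 0.667, i.e. q_max = 2/3
```

With these assumptions zeta_roche reduces analytically to 2q − 5/3, so setting it equal to the donor's −1/3 recovers q = 2/3 exactly; the paper's slightly different value comes from refining these ingredients.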
Multivariate max-stable spatial processes
Genton, Marc G.; Padoan, S. A.; Sang, H.
2015-01-01
Max-stable processes allow the spatial dependence of extremes to be modelled and quantified, so they are widely adopted in applications. For a better understanding of extremes, it may be useful to study several variables simultaneously. To this end, we study the maxima of independent replicates of multivariate processes, both in the Gaussian and Student-t cases. We define a Poisson process construction and introduce multivariate versions of the Smith Gaussian extreme-value, the Schlather extremal-Gaussian and extremal-t, and the Brown–Resnick models. We develop inference for the models based on composite likelihoods. We present results of Monte Carlo simulations and an application to daily maximum wind speed and wind gust.
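For readers new to the topic, the defining univariate property behind these models can be checked directly: a distribution is max-stable when normalized maxima of iid copies keep the same law, as with the unit Fréchet CDF F(x) = exp(−1/x). A small self-contained check (background illustration only, not the paper's multivariate constructions):

```python
# Max-stability of the unit Frechet law: the CDF of max(X_1..X_n)/n for
# iid unit Frechet X_i is F(n*x)**n, which equals F(x) for every n.
from math import exp

def frechet_cdf(x):
    return exp(-1.0 / x)                # unit Frechet CDF, x > 0

for n in (2, 10, 100):
    for x in (0.5, 1.0, 3.0):
        assert abs(frechet_cdf(n * x) ** n - frechet_cdf(x)) < 1e-12
print("max-stability verified")
```

This is exactly why Fréchet margins are the standard normalization for the spatial models named in the abstract.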
Multivariate max-stable spatial processes
Genton, Marc G.
2015-02-11
Max-stable processes allow the spatial dependence of extremes to be modelled and quantified, so they are widely adopted in applications. For a better understanding of extremes, it may be useful to study several variables simultaneously. To this end, we study the maxima of independent replicates of multivariate processes, both in the Gaussian and Student-t cases. We define a Poisson process construction and introduce multivariate versions of the Smith Gaussian extreme-value, the Schlather extremal-Gaussian and extremal-t, and the Brown–Resnick models. We develop inference for the models based on composite likelihoods. We present results of Monte Carlo simulations and an application to daily maximum wind speed and wind gust.
Stable isotope research pool inventory
International Nuclear Information System (INIS)
1984-03-01
This report contains a listing of electromagnetically separated stable isotopes which are available at the Oak Ridge National Laboratory for distribution for nondestructive research use on a loan basis. This inventory includes all samples of stable isotopes in the Research Materials Collection and does not designate whether a sample is out on loan or is in reprocessing. For some of the high abundance naturally occurring isotopes, larger amounts can be made available; for example, Ca-40 and Fe-56
French days on stable isotopes
International Nuclear Information System (INIS)
2000-01-01
These first French days on stable isotopes took place in parallel with the 1. French days of environmental chemistry. Both conferences had common plenary sessions. The conference covers all aspects of the use of stable isotopes in the following domains: medicine, biology, environment, tracer techniques, agronomy, food industry, geology, petroleum geochemistry, cosmo-geochemistry, archaeology, bio-geochemistry, hydrology, climatology, nuclear and particle physics, astrophysics, isotope separations etc.. Abstracts available on CD-Rom only. (J.S.)
Stable isotope research pool inventory
International Nuclear Information System (INIS)
1982-01-01
This report contains a listing of electromagnetically separated stable isotopes which are available for distribution within the United States for nondestructive research use from the Oak Ridge National Laboratory on a loan basis. This inventory includes all samples of stable isotopes in the Material Research Collection and does not designate whether a sample is out on loan or in reprocessing. For some of the high abundance naturally occurring isotopes, larger amounts can be made available; for example, Ca-40 and Fe-56
Pharmaceuticals labelled with stable isotopes
International Nuclear Information System (INIS)
Krumbiegel, P.
1986-11-01
The relatively new field of pharmaceuticals labelled with stable isotopes is reviewed. Scientific, juridical, and ethical questions are discussed concerning the application of these pharmaceuticals in human medicine. 13C, 15N, and 2H are the stable isotopes mainly utilized in metabolic function tests. Methodical contributions are given to the application of 2H, 13C, and 15N pharmaceuticals, showing new aspects and different states of development in the field under discussion. (author)
Direct maximum parsimony phylogeny reconstruction from genotype data.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2007-12-05
Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data, so phylogenetic applications for autosomal data must rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
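As background to the parsimony criterion used above, here is a minimal sketch of Fitch's small-parsimony algorithm, which counts the minimum number of mutations a fixed tree topology needs to explain given leaf sequences. This is illustrative of the scoring criterion only, not the genotype-based method the paper develops; the tree and sequences are invented.

```python
# Fitch's small-parsimony algorithm: given a fixed binary tree (nested
# tuples of leaf names) and a state string per leaf, count the minimum
# number of mutations needed to explain the leaf states.

def fitch_score(tree, leaf_states):
    """tree: nested tuples of leaf names; leaf_states: name -> state string."""
    mutations = 0

    def post(node):
        nonlocal mutations
        if isinstance(node, str):                 # leaf: singleton state sets
            return [{c} for c in leaf_states[node]]
        left, right = (post(child) for child in node)
        merged = []
        for a, b in zip(left, right):
            inter = a & b
            if inter:
                merged.append(inter)              # child states agree
            else:
                merged.append(a | b)              # disagreement: one mutation
                mutations += 1
        return merged

    post(tree)
    return mutations
```

For the toy tree ((A,B),(C,D)) with two binary sites (A=00, B=01, C=10, D=10), the score is 2: one mutation at the second site inside the (A,B) clade, and one at the first site between the two clades.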
Maximum Entropy Production Is Not a Steady State Attractor for 2D Fluid Convection
Directory of Open Access Journals (Sweden)
Stuart Bartlett
2016-12-01
Full Text Available Multiple authors have claimed that the natural convection of a fluid is a process that exhibits maximum entropy production (MEP). However, almost all such investigations were limited to fixed temperature boundary conditions (BCs). It was found that under those conditions, the system tends to maximize its heat flux, and hence it was concluded that the MEP state is a dynamical attractor. However, since entropy production varies with heat flux and difference of inverse temperature, it is essential that any complete investigation of entropy production allows for variations in heat flux and temperature difference. Only then can we legitimately assess whether the MEP state is the most attractive. Our previous work made use of negative feedback BCs to explore this possibility. We found that the steady state of the system was far from the MEP state. For any system, entropy production can only be maximized subject to a finite set of physical and material constraints. In the case of our previous work, it was possible that the adopted set of fluid parameters was constraining the system in such a way that it was entirely prevented from reaching the MEP state. Hence, in the present work, we used a different set of boundary parameters, such that the steady states of the system were in the local vicinity of the MEP state. If MEP were indeed an attractor, relaxing those constraints of our previous work should have caused a discrete perturbation to the surface of steady state heat flux values near the value corresponding to MEP. We found no such perturbation, and hence no discernible attraction to the MEP state. Furthermore, systems with fixed flux BCs actually minimize their entropy production (relative to the alternative stable state, that of pure diffusive heat transport). This leads us to conclude that the principle of MEP is not an accurate indicator of which stable steady state a convective system will adopt. However, for all BCs considered, the quotient of
PTree: pattern-based, stochastic search for maximum parsimony phylogenies
Directory of Open Access Journals (Sweden)
Ivan Gregor
2013-06-01
Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
PTree: pattern-based, stochastic search for maximum parsimony phylogenies.
Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C
2013-01-01
Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
A Stochastic Maximum Principle for General Mean-Field Systems
International Nuclear Information System (INIS)
Buckdahn, Rainer; Li, Juan; Ma, Jin
2016-01-01
In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and that the coefficients are only continuous in the control variable, without any further regularity or convexity assumptions. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second-order variational equations and the corresponding second-order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.
A Stochastic Maximum Principle for General Mean-Field Systems
Energy Technology Data Exchange (ETDEWEB)
Buckdahn, Rainer, E-mail: Rainer.Buckdahn@univ-brest.fr [Université de Bretagne-Occidentale, Département de Mathématiques (France); Li, Juan, E-mail: juanli@sdu.edu.cn [Shandong University, Weihai, School of Mathematics and Statistics (China); Ma, Jin, E-mail: jinma@usc.edu [University of Southern California, Department of Mathematics (United States)
2016-12-15
In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and that the coefficients are only continuous in the control variable, without any further regularity or convexity assumptions. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second-order variational equations and the corresponding second-order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.
Maximum gravitational redshift of white dwarfs
International Nuclear Information System (INIS)
Shapiro, S.L.; Teukolsky, S.A.
1976-01-01
The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ2, ζ3, and ζ4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s^-1 for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s^-1 for carbon stars (the neutronization limit) and to 893 km s^-1 for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.
The maximum economic depth of groundwater abstraction for irrigation
Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.
2017-12-01
Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of global food production, and its importance is expected to grow further in the near future. Already, about 70% of globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, abstraction exceeds recharge, and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped while remaining economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs, or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where the costs of well drilling and the energy costs of pumping, which are functions of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries using GDP per capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000, and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas of the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. The most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of
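The cost-revenue comparison described above can be sketched as a toy model: find the largest pumping depth at which annualized drilling costs plus the energy cost of lifting the water still stay below yearly crop revenue. All parameter values below are hypothetical placeholders, not the paper's calibrated US-based inputs.

```python
# Toy maximum-economic-depth model: annual cost of pumping from a given
# depth versus a fixed annual crop revenue. All numbers are illustrative.

RHO_G = 9810.0          # weight density of water, N/m^3 (rho * g)

def annual_cost(depth_m, volume_m3, energy_price_per_J=2e-8,
                drill_cost_per_m=300.0, amortization_years=20.0,
                pump_efficiency=0.5):
    """Energy cost of lifting `volume_m3` per year from `depth_m`,
    plus amortized drilling cost for a well of that depth."""
    energy_J = RHO_G * depth_m * volume_m3 / pump_efficiency
    return (energy_J * energy_price_per_J
            + drill_cost_per_m * depth_m / amortization_years)

def max_economic_depth(revenue_per_year, volume_m3, step=1.0, limit=2000.0):
    """Deepest depth (in `step` increments) whose annual cost is covered."""
    depth = 0.0
    while (depth + step <= limit
           and annual_cost(depth + step, volume_m3) <= revenue_per_year):
        depth += step
    return depth
```

With, say, 1e5 m3 of irrigation water per year and $20,000 of annual revenue, the toy model lands in the low hundreds of metres, the same order as the 50-500 m range reported above.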
Directory of Open Access Journals (Sweden)
Jairo Vázquez-Guerrero
Full Text Available The purpose of the study was to compare the force outputs achieved during a squat exercise using a rotational inertia device in stable versus unstable conditions with different loads and in concentric and eccentric phases. Thirteen male athletes (mean ± SD: age 23.7 ± 3.0 years, height 1.80 ± 0.08 m, body mass 77.4 ± 7.9 kg were assessed while squatting, performing one set of three repetitions with four different loads under stable and unstable conditions at maximum concentric effort. Overall, there were no significant differences between the stable and unstable conditions at each of the loads for any of the dependent variables. Mean force showed significant differences between some of the loads in stable and unstable conditions (P < 0.010 and peak force output differed between all loads for each condition (P < 0.045. Mean force outputs were greater in the concentric than in the eccentric phase under both conditions and with all loads (P < 0.001. There were no significant differences in peak force between concentric and eccentric phases at any load in either stable or unstable conditions. In conclusion, squatting with a rotational inertia device allowed the generation of similar force outputs under stable and unstable conditions at each of the four loads. The study also provides empirical evidence of the different force outputs achieved by adjusting load conditions on the rotational inertia device when performing squats, especially in the case of peak force. Concentric force outputs were significantly higher than eccentric outputs, except for peak force under both conditions. These findings support the use of the rotational inertia device to train the squatting exercise under unstable conditions for strength and conditioning trainers. The device could also be included in injury prevention programs for muscle lesions and ankle and knee joint injuries.
Maximum entropy analysis of EGRET data
DEFF Research Database (Denmark)
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed with the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
The Maximum Resource Bin Packing Problem
DEFF Research Database (Denmark)
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, to maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
Shower maximum detector for SDC calorimetry
International Nuclear Information System (INIS)
Ernwein, J.
1994-01-01
A prototype for the SDC end-cap electromagnetic (EM) calorimeter, complete with a pre-shower and a shower maximum detector, was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs.
Topics in Bayesian statistics and maximum entropy
International Nuclear Information System (INIS)
Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.
1998-12-01
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
49 CFR Appendix B to Part 386 - Penalty Schedule; Violations and Maximum Civil Penalties
2010-10-01
... Maximum Civil Penalties The Debt Collection Improvement Act of 1996 [Public Law 104-134, title III... civil penalties set out in paragraphs (e)(1) through (4) of this appendix results in death, serious... 49 Transportation 5 2010-10-01 2010-10-01 false Penalty Schedule; Violations and Maximum Civil...
Probable Maximum Earthquake Magnitudes for the Cascadia Subduction Zone
Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.
2013-12-01
The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternative concept of mp(T), the probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as the tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, the β-value (which equals 2/3 of the b-value in the GR distribution) and the corner magnitude (mc), can be obtained by applying the maximum likelihood method to earthquake catalogs with an additional constraint from the tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000-year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, the rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
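The mp(T) idea can be illustrated with a small sketch: under a tapered Gutenberg-Richter law, take the probable maximum magnitude in T years to be the magnitude whose expected number of exceedances over T equals one. The β-value, corner magnitude, event rate, and the exceedance criterion itself are illustrative assumptions here, not the paper's Cascadia estimates.

```python
import math

# Tapered Gutenberg-Richter (TGR) survival function in seismic moment,
# and a bisection solve for the magnitude with one expected exceedance.

def moment(m):
    """Seismic moment (N*m) from moment magnitude."""
    return 10 ** (1.5 * m + 9.05)

def tgr_survival(m, beta, m_corner, m_threshold):
    """TGR probability that an event above m_threshold exceeds magnitude m."""
    M, Mt, Mc = moment(m), moment(m_threshold), moment(m_corner)
    return (M / Mt) ** (-beta) * math.exp((Mt - M) / Mc)

def probable_max_magnitude(T_years, rate_per_year, beta, m_corner,
                           m_threshold=5.0):
    """Magnitude whose expected number of exceedances in T years is 1."""
    lo, hi = m_threshold, 10.5
    for _ in range(100):               # bisection: exceedance count is
        mid = 0.5 * (lo + hi)          # monotonically decreasing in m
        expected = rate_per_year * T_years * tgr_survival(mid, beta,
                                                          m_corner, m_threshold)
        if expected > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As expected, lengthening the time window T raises the probable maximum magnitude, while the corner magnitude tapers off the largest events.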
Maximum power point tracker for photovoltaic power plants
Arcidiacono, V.; Corsi, S.; Lambri, L.
The paper describes two different closed-loop control criteria for the maximum power point tracking of the voltage-current characteristic of a photovoltaic generator. The two criteria are discussed and compared, inter alia, with regard to the setting-up problems that they pose. Although a detailed analysis is not embarked upon, the paper also provides some quantitative information on the energy advantages obtained by using electronic maximum power point tracking systems, as compared with the situation in which the point of operation of the photovoltaic generator is not controlled at all. Lastly, the paper presents two high-efficiency MPPT converters for experimental photovoltaic plants of the stand-alone and the grid-interconnected type.
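One common closed-loop MPPT criterion of the kind discussed above is perturb and observe: perturb the operating voltage and reverse direction whenever the output power drops. The quadratic panel curve below is a stand-in for a real photovoltaic I-V characteristic; the paper's converters implement analogous logic in hardware.

```python
# Minimal perturb-and-observe (P&O) maximum power point tracker.

def panel_power(v, v_mpp=17.0, p_max=100.0):
    """Toy PV power curve: concave with a single maximum at v_mpp volts."""
    return max(0.0, p_max - 0.5 * (v - v_mpp) ** 2)

def perturb_and_observe(v0=10.0, step=0.1, iterations=500):
    """Track the maximum power point by small voltage perturbations."""
    v, direction = v0, +1.0
    p_prev = panel_power(v)
    for _ in range(iterations):
        v += direction * step
        p = panel_power(v)
        if p < p_prev:              # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v
```

The tracker climbs to the maximum power point and then oscillates within one perturbation step of it, which is the characteristic steady-state behaviour (and energy penalty) of P&O controllers.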
Jarzynski equality in the context of maximum path entropy
González, Diego; Davis, Sergio
2017-06-01
In the global framework of deriving nonequilibrium statistical mechanics axiomatically from fundamental principles, such as the principle of maximum path entropy (also known as the Maximum Caliber principle), this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy difference between two equilibrium thermodynamic states to the work performed when going between those states, through an average over a path ensemble. In this work, the analysis of Jarzynski's equality is performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamic settings such as social, financial, and ecological systems.
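The equality itself, ⟨exp(-βW)⟩ = exp(-βΔF), is easy to check numerically: for a Gaussian work distribution with mean μ and variance σ², it holds exactly with ΔF = μ - βσ²/2. The sketch below is a Monte Carlo sanity check of that textbook fact, not the path-entropy derivation the paper develops.

```python
import math
import random

# Jarzynski estimator: recover the free energy difference from an
# exponential average of sampled work values.

def jarzynski_free_energy(works, beta=1.0):
    """dF = -(1/beta) * log <exp(-beta * W)> over the work samples."""
    avg = sum(math.exp(-beta * w) for w in works) / len(works)
    return -math.log(avg) / beta

random.seed(0)
mu, sigma, beta = 2.0, 0.5, 1.0
works = [random.gauss(mu, sigma) for _ in range(200_000)]

dF_estimate = jarzynski_free_energy(works, beta)
dF_exact = mu - beta * sigma**2 / 2     # Gaussian case: 1.875 here
```

Note that although the mean work is 2.0, the recovered ΔF is smaller, consistent with ⟨W⟩ ≥ ΔF (the second law as a corollary of the equality).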
Rumor Identification with Maximum Entropy in MicroNet
Directory of Open Access Journals (Sweden)
Suisheng Yu
2017-01-01
Full Text Available The widely used Microblog, WeChat, and other social networking platforms (which we call MicroNet) shorten the period of information dissemination and expand its range, which allows rumors to cause greater harm and have more influence. A hot topic in the information dissemination field is how to identify and block rumors. Based on the maximum entropy model, this paper constructs a recognition mechanism for rumor information in the micro-network environment. First, based on information entropy theory, we obtain the characteristics of rumor information using the maximum entropy model. Next, we optimize the original classifier training set and the feature function to divide information into rumors and non-rumors. Finally, experimental simulation results show that the rumor identification results using this method are better than those of the original classifier and other related classification methods.
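A minimal sketch of a maximum-entropy (logistic) classifier of the kind described above: binary word features, with weights trained by gradient ascent on the log-likelihood. The four-document rumor/non-rumor corpus is invented for illustration and is not the paper's data set or feature design.

```python
import math

# Maximum-entropy (binary logistic) text classifier over bag-of-words
# features, trained by stochastic gradient ascent on the log-likelihood.

def featurize(text, vocab):
    words = set(text.split())
    return [1.0 if w in words else 0.0 for w in vocab]

def train_maxent(examples, vocab, epochs=300, lr=0.5):
    w = [0.0] * (len(vocab) + 1)                 # last weight is the bias
    for _ in range(epochs):
        for text, label in examples:
            x = featurize(text, vocab) + [1.0]
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))       # P(rumor | text)
            for i, xi in enumerate(x):           # log-likelihood gradient
                w[i] += lr * (label - p) * xi
    return w

def predict(w, text, vocab):
    x = featurize(text, vocab) + [1.0]
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

corpus = [("shocking secret cure banned", 1),
          ("official report released today", 0),
          ("secret plot exposed shocking", 1),
          ("weather forecast sunny today", 0)]
vocab = sorted({w for text, _ in corpus for w in text.split()})
weights = train_maxent(corpus, vocab)
```

Among distributions consistent with the feature constraints, this model is the maximum-entropy one, which is why logistic regression is the standard "maxent classifier" in text processing.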
Maximum Entropy Closure of Balance Equations for Miniband Semiconductor Superlattices
Directory of Open Access Journals (Sweden)
Luis L. Bonilla
2016-07-01
Full Text Available Charge transport in nanosized electronic systems is described by semiclassical or quantum kinetic equations that are often costly to solve numerically and difficult to reduce systematically to macroscopic balance equations for densities, currents, temperatures, and other moments of macroscopic variables. The maximum entropy principle can be used to close the system of equations for the moments, but its accuracy and range of validity are not always clear. In this paper, we compare numerical solutions of balance equations for nonlinear electron transport in semiconductor superlattices. The equations have been obtained from Boltzmann–Poisson kinetic equations very far from equilibrium, for strong fields, either by the maximum entropy principle or by a systematic Chapman–Enskog perturbation procedure. Both approaches produce the same current-voltage characteristic curve for uniform fields. When the superlattices are DC voltage biased in a region where there are stable time-periodic solutions corresponding to the recycling and motion of electric field pulses, the differences between the numerical solutions produced by the two types of balance equations are smaller than the expansion parameter used in the perturbation procedure. These results and possible new research venues are discussed.
Nonsymmetric entropy and maximum nonsymmetric entropy principle
International Nuclear Information System (INIS)
Liu Chengshi
2009-01-01
Within the frame of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann entropy and Shannon entropy, is defined, and the maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis entropy, for deriving power laws.
Maximum speed of dewetting on a fiber
Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus
2011-01-01
A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed
Maximum gain of Yagi-Uda arrays
DEFF Research Database (Denmark)
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
Correlation between Maximum Dry Density and Cohesion
African Journals Online (AJOL)
HOD
represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle in general, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
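The quoted relation can be checked at the order-of-magnitude level with textbook inputs (all in GeV): T_BBN of order 1 MeV, the Planck mass, and the electron Yukawa y_e = sqrt(2) m_e / v_h. These numbers are standard values, not taken from the paper.

```python
# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5), in GeV.

T_BBN = 1e-3        # rough temperature at the start of nucleosynthesis
M_PL = 1.22e19      # Planck mass
Y_E = 2.94e-6       # electron Yukawa, sqrt(2) * 0.000511 / 246

vh_estimate = T_BBN**2 / (M_PL * Y_E**5)
# comes out at a few hundred GeV, consistent with v_h = O(300 GeV)
```

The extreme sensitivity to y_e (a fifth power) is why the estimate is only meaningful at the order-of-magnitude level.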
The maximum-entropy method in superspace
Czech Academy of Sciences Publication Activity Database
van Smaalen, S.; Palatinus, Lukáš; Schneider, M.
2003-01-01
Vol. 59 (2003), p. 459–469. ISSN 0108-7673. Institutional research plan: CEZ:AV0Z1010914. Keywords: maximum-entropy method; aperiodic crystals; electron density. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 1.558 (2003)
Achieving maximum sustainable yield in mixed fisheries
Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna
2017-01-01
Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example
5 CFR 534.203 - Maximum stipends.
2010-01-01
... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...
Minimal length, Friedmann equations and maximum density
Energy Technology Data Exchange (ETDEWEB)
Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)
2014-06-16
Inspired by Jacobson's thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy density that is reachable in a finite time.
Tsallis distribution as a standard maximum entropy solution with 'tail' constraint
International Nuclear Information System (INIS)
Bercher, J.-F.
2008-01-01
We show that Tsallis distributions can be derived from the standard (Shannon) maximum entropy setting by incorporating a constraint on the divergence between the distribution and another distribution imagined as its tail. In this setting, we find that the underlying entropy is the Rényi entropy. Furthermore, escort distributions and generalized means appear as a direct consequence of the construction. Finally, the 'maximum entropy tail distribution' is identified as a Generalized Pareto Distribution.
MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.
Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang
2018-02-02
The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 (protein) times (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*, but 1.6 (DNA) to 4.1 (protein) times slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 (protein) times (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot.
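The nonparametric bootstrap that MPBoot approximates works by resampling alignment columns with replacement and repeating the tree search on each pseudo-alignment. The sketch below shows only the resampling and support-counting steps; the predicate argument is a placeholder standing in for a full parsimony search and split test.

```python
import random

# Column-resampling bootstrap for a multiple sequence alignment: a branch's
# support is the fraction of bootstrap replicates in which it is recovered.

def bootstrap_columns(alignment, rng):
    """alignment: list of equal-length sequences; returns one replicate
    built by sampling columns with replacement."""
    n_cols = len(alignment[0])
    cols = [rng.randrange(n_cols) for _ in range(n_cols)]
    return ["".join(seq[c] for c in cols) for seq in alignment]

def bootstrap_support(alignment, has_split, replicates=100, seed=1):
    """Fraction of replicates for which `has_split(replicate)` holds.
    In a real analysis, `has_split` would rerun the tree search and test
    whether the branch of interest appears in the resulting tree."""
    rng = random.Random(seed)
    hits = sum(has_split(bootstrap_columns(alignment, rng))
               for _ in range(replicates))
    return hits / replicates
```

Because every replicate needs its own tree search, the total cost scales linearly with the number of replicates, which is exactly the bottleneck that approximations like UFBoot and MPBoot attack.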
Multivariate Max-Stable Spatial Processes
Genton, Marc G.
2014-01-06
Analysis of spatial extremes is currently based on univariate processes. Max-stable processes allow the spatial dependence of extremes to be modelled and explicitly quantified; they are therefore widely adopted in applications. For a better understanding of extreme events of real processes, such as environmental phenomena, it may be useful to study several spatial variables simultaneously. To this end, we extend some theoretical results and applications of max-stable processes to the multivariate setting to analyze extreme events of several variables observed across space. In particular, we study the maxima of independent replicates of multivariate processes, both in the Gaussian and Student-t cases. Then, we define a Poisson process construction in the multivariate setting and introduce multivariate versions of the Smith Gaussian extreme-value, the Schlather extremal-Gaussian and extremal-t, and the Brown-Resnick models. Inferential aspects of those models based on composite likelihoods are developed. We present results of various Monte Carlo simulations and of an application to a dataset of summer daily temperature maxima and minima in Oklahoma, U.S.A., highlighting the utility of working with multivariate models in contrast to the univariate case. Based on joint work with Simone Padoan and Huiyan Sang.
Twenty-five years of maximum-entropy principle
Kapur, J. N.
1983-04-01
The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.
Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution
Directory of Open Access Journals (Sweden)
Hare Krishna
2017-01-01
Full Text Available In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on the Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Also, Bayesian credible and highest posterior density (HPD credible intervals are obtained for the parameters. Expected time on test and reliability characteristics are also analyzed in this article. To compare the various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration purposes, a randomly censored real data set is discussed.
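For the geometric model the randomly censored maximum likelihood estimator has a simple closed form. A minimal sketch, assuming the parametrization P(X = k) = p(1-p)^(k-1), k = 1, 2, … (the article's notation may differ):

```python
def geometric_mle_censored(times, events):
    """MLE of p for geometric lifetimes under random censoring.
    times[i]  : observed value (failure time if events[i] == 1,
                censoring time otherwise)
    events[i] : 1 if the failure was observed, 0 if censored
    An observed x contributes p*(1-p)**(x-1) to the likelihood; a
    censored x contributes the survival term (1-p)**x.  The
    log-likelihood D*log(p) + (sum(times)-D)*log(1-p), with D the
    number of observed failures, is maximized at p_hat = D/sum(times)."""
    d = sum(events)
    return d / sum(times)

# three observed failures and one censored observation
p_hat = geometric_mle_censored([3, 1, 5, 2], [1, 1, 0, 1])
# D = 3, sum of times = 11, so p_hat = 3/11
```

The Fisher-information-based confidence intervals in the article would then be built around this point estimate.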
The Ultraviolet Spectrometer and Polarimeter on the Solar Maximum Mission
Woodgate, B. E.; Brandt, J. C.; Kalet, M. W.; Kenny, P. J.; Tandberg-Hanssen, E. A.; Bruner, E. C.; Beckers, J. M.; Henze, W.; Knox, E. D.; Hyder, C. L.
1980-01-01
The Ultraviolet Spectrometer and Polarimeter (UVSP) on the Solar Maximum Mission spacecraft is described, including the experiment objectives, system design, performance, and modes of operation. The instrument operates in the wavelength range 1150-3600 Å with better than 2 arcsec spatial resolution, raster range 256 x 256 sq arcsec, and 20 mÅ spectral resolution in second order. Observations can be made with specific sets of four lines simultaneously, or with both sides of two lines simultaneously for velocity and polarization. A rotatable retarder can be inserted into the spectrometer beam for measurement of Zeeman splitting and linear polarization in the transition region and chromosphere.
The ultraviolet spectrometer and polarimeter on the solar maximum mission
International Nuclear Information System (INIS)
Woodgate, B.E.; Brandt, J.C.; Kalet, M.W.; Kenny, P.J.; Beckers, J.M.; Henze, W.; Hyder, C.L.; Knox, E.D.
1980-01-01
The Ultraviolet Spectrometer and Polarimeter (UVSP) on the Solar Maximum Mission spacecraft is described, including the experiment objectives, system design, performance, and modes of operation. The instrument operates in the wavelength range 1150-3600 Å with better than 2 arcsec spatial resolution, raster range 256 x 256 sq arcsec, and 20 mÅ spectral resolution in second order. Observations can be made with specific sets of 4 lines simultaneously, or with both sides of 2 lines simultaneously for velocity and polarization. A rotatable retarder can be inserted into the spectrometer beam for measurement of Zeeman splitting and linear polarization in the transition region and chromosphere. (orig.)
Parameter estimation of sub-Gaussian stable distributions
Czech Academy of Sciences Publication Activity Database
Omelchenko, Vadym
2014-01-01
Vol. 50, No. 6 (2014), pp. 929-949 ISSN 0023-5954 R&D Projects: GA ČR GA13-14445S Institutional support: RVO:67985556 Keywords: stable distribution * sub-Gaussian distribution * maximum likelihood Subject RIV: AH - Economics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/omelchenko-0439707.pdf
On the likelihood function of Gaussian max-stable processes
Genton, M. G.; Ma, Y.; Sang, H.
2011-01-01
We derive a closed form expression for the likelihood function of a Gaussian max-stable process indexed by ℝd at p≤d+1 sites, d≥1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p=2 to p=3 sites in ℝ2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.
[Current Treatment of Stable Angina].
Toggweiler, Stefan; Jamshidi, Peiman; Cuculi, Florim
2015-06-17
Current therapy for stable angina includes surgical and percutaneous revascularization, which has improved tremendously over the last decades. Smoking cessation and regular exercise are the cornerstone for prevention of further cerebrovascular events. Medical treatment includes treatment of cardiovascular risk factors and antithrombotic management, which can be a challenge in some patients. Owing to the fact that coronary revascularization is readily accessible these days in many industrialized countries, the importance of antianginal therapy has decreased over the past years. This article presents a contemporary overview of the management of patients with stable angina in the year 2015.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Wheel sets. 229.73 Section 229.73 Transportation... TRANSPORTATION RAILROAD LOCOMOTIVE SAFETY STANDARDS Safety Requirements Suspension System § 229.73 Wheel sets. (a...) when applied or turned. (b) The maximum variation in the diameter between any two wheel sets in a three...
International Nuclear Information System (INIS)
1991-01-01
The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or genome mutations. (VT)
2010-07-27
...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...
Zipf's law, power laws and maximum entropy
International Nuclear Information System (INIS)
Visser, Matt
2013-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
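The simpler maximum entropy route to power laws argued for here can be written out in a few lines. This is the standard derivation with generic Lagrange multipliers α and λ, not a transcription of the paper:

```latex
% Maximize the Shannon entropy subject to normalization
% and a fixed average of the logarithm of the observable
\max_{p}\; H[p] = -\sum_{x} p(x)\ln p(x)
\quad\text{s.t.}\quad \sum_{x} p(x) = 1,\qquad \sum_{x} p(x)\ln x = \mu.
% Lagrangian and stationarity condition
\mathcal{L} = H[p] - \alpha\Big(\sum_x p(x) - 1\Big)
            - \lambda\Big(\sum_x p(x)\ln x - \mu\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p(x)}
  = -\ln p(x) - 1 - \alpha - \lambda \ln x = 0.
% Solving for p(x) gives a pure power law,
% with exponent fixed by the constraint value mu:
p(x) = e^{-(1+\alpha)}\, x^{-\lambda} \;\propto\; x^{-\lambda}.
```

The single constraint on ⟨ln x⟩ is thus enough to produce Zipf-like behavior, without the more elaborate cost function of the RGF model.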
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum likelihood estimation for integrated diffusion processes
DEFF Research Database (Denmark)
Baltazar-Larios, Fernando; Sørensen, Michael
We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean that would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.
Maximum parsimony on subsets of taxa.
Fischer, Mareike; Thatte, Bhalchandra D
2009-09-21
In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
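Fitch's bottom-up parsimony pass, whose reconstruction accuracy the paper analyzes, is short enough to sketch. This is a textbook implementation for a binary tree and a single character, not the paper's code:

```python
def fitch(tree, leaf_states):
    """Bottom-up pass of Fitch's maximum parsimony algorithm.
    tree: nested 2-tuples of leaf names, e.g. (("a", "b"), ("c", "d")).
    leaf_states: dict mapping leaf name -> character state, e.g. "A".
    Returns (candidate state set at the root, parsimony score)."""
    if isinstance(tree, str):                     # leaf node
        return {leaf_states[tree]}, 0
    (ls, lc) = fitch(tree[0], leaf_states)
    (rs, rc) = fitch(tree[1], leaf_states)
    inter = ls & rs
    if inter:                                     # children agree: intersect
        return inter, lc + rc
    return ls | rs, lc + rc + 1                   # children disagree: union, +1 change

# four taxa, one character: minimum one substitution is required
states, score = fitch((("t1", "t2"), ("t3", "t4")),
                      {"t1": "A", "t2": "A", "t3": "G", "t4": "A"})
```

The ancestral-state guess for the root is drawn from the returned candidate set; the paper's question is how accurate that guess is when the pass is run on a subset of taxa rather than all of them.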
Maximum entropy analysis of liquid diffraction data
International Nuclear Information System (INIS)
Root, J.H.; Egelstaff, P.A.; Nickel, B.G.
1986-01-01
A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)
Automatic maximum entropy spectral reconstruction in NMR
International Nuclear Information System (INIS)
Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.
2007-01-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system
Maximum Neutron Flux at Thermal Nuclear Reactors
International Nuclear Information System (INIS)
Strugar, P.
1968-10-01
Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings by appropriate reactor lattice configurations. There is a number of papers, and practical examples of reactors with central reflector, dealing with spatial distribution of fuel elements which would result in higher neutron flux. A common disadvantage of all the solutions is that the choice of the best solution is made starting from anticipated spatial distributions of fuel elements. The weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Thus the problem of determining the maximum neutron flux reduces to a variational problem which is beyond the possibilities of classical variational calculation. This variational problem has been successfully solved by applying the maximum principle of Pontrjagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself.
International Nuclear Information System (INIS)
Liu, Yao; Chen, Yuehua; Tan, Kezhu; Xie, Hong; Wang, Liguo; Xie, Wu; Yan, Xiaozhen; Xu, Zhen
2016-01-01
Band selection is considered to be an important processing step in handling hyperspectral data. In this work, we selected informative bands according to the maximal relevance minimal redundancy (MRMR) criterion based on neighborhood mutual information. Two measures MRMR difference and MRMR quotient were defined and a forward greedy search for band selection was constructed. The performance of the proposed algorithm, along with a comparison with other methods (neighborhood dependency measure based algorithm, genetic algorithm and uninformative variable elimination algorithm), was studied using the classification accuracy of extreme learning machine (ELM) and random forests (RF) classifiers on soybeans’ hyperspectral datasets. The results show that the proposed MRMR algorithm leads to promising improvement in band selection and classification accuracy. (paper)
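The MRMR-difference forward search described above can be sketched compactly. Note this sketch uses plain discrete mutual information rather than the neighborhood mutual information of the paper, and all names are illustrative:

```python
import numpy as np

def mutual_info(x, y):
    """Mutual information (in nats) between two discrete sequences."""
    n = len(x)
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def mrmr_difference(bands, labels, k):
    """Greedy forward selection with the MRMR 'difference' criterion:
    relevance I(band; labels) minus mean redundancy with the bands
    already selected.  bands: (n_samples, n_bands) discrete array."""
    n_bands = bands.shape[1]
    relevance = [mutual_info(bands[:, j], labels) for j in range(n_bands)]
    selected = [int(np.argmax(relevance))]        # start with most relevant band
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_bands):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info(bands[:, j], bands[:, s])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# toy data: band 0 perfectly predicts the label, bands 1-2 are uninformative
X = np.array([[0, 0, 0], [1, 0, 0], [0, 0, 1], [1, 0, 1],
              [0, 0, 0], [1, 0, 0], [0, 0, 1], [1, 0, 1]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])
picked = mrmr_difference(X, y, 2)
```

The MRMR-quotient variant divides relevance by redundancy instead of subtracting; the greedy loop is otherwise identical.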
Setting maximum sustainable yield targets when yield of one species affects that of other species
DEFF Research Database (Denmark)
Rindorf, Anna; Reid, David; Mackinson, Steve
2012-01-01
species. But how should we prioritize and identify most appropriate targets? Do we prefer to maximize by focusing on total yield in biomass across species, or are other measures targeting maximization of profits or preserving high living qualities more relevant? And how do we ensure that targets remain...
Possibility of stable quark stars
International Nuclear Information System (INIS)
Bowers, R.L.; Gleeson, A.M.; Pedigo, R.D.
1976-08-01
A recent zero temperature equation of state which contains quark-partons separated from conventional baryons by a phase transition is used to investigate the stability of quark stars. The sensitivity to the input physics is also considered. The conclusions, which are found to be relatively model independent, indicate that a separately identifiable class of stable objects called quark stars does not exist
Radiation-stable polyolefin compositions
International Nuclear Information System (INIS)
Rekers, J.W.
1986-01-01
This invention relates to compositions of olefinic polymers suitable for high energy radiation treatment. In particular, the invention relates to olefinic polymer compositions that are stable to sterilizing dosages of high energy radiation such as a gamma radiation. Stabilizers are described that include benzhydrol and benzhydrol derivatives; these stabilizers may be used alone or in combination with secondary antioxidants or synergists
Mid-depth temperature maximum in an estuarine lake
Stepanenko, V. M.; Repina, I. A.; Artamonov, A. Yu; Gorin, S. L.; Lykossov, V. N.; Kulyamin, D. V.
2018-03-01
The mid-depth temperature maximum (TeM) was measured in an estuarine Bol’shoi Vilyui Lake (Kamchatka peninsula, Russia) in summer 2015. We applied the 1D k-ɛ model LAKE to the case, and found it successfully simulating the phenomenon. We argue that the main prerequisite for mid-depth TeM development is a salinity increase below the freshwater mixed layer, sharp enough in order for the temperature increase with depth not to cause convective mixing and double diffusion there. Given that this condition is satisfied, the TeM magnitude is controlled by physical factors which we identified as: radiation absorption below the mixed layer, mixed-layer temperature dynamics, vertical heat conduction and water-sediments heat exchange. In addition to these, we formulate the mechanism of temperature maximum ‘pumping’, resulting from the phase shift between diurnal cycles of mixed-layer depth and temperature maximum magnitude. Based on the LAKE model results we quantify the contribution of the above listed mechanisms and find their individual significance highly sensitive to water turbidity. Relying on the physical mechanisms identified, we define environmental conditions favouring the summertime TeM development in salinity-stratified lakes as: small mixed-layer depth (roughly, ~), wind and cloudless weather. We exemplify the effect of mixed-layer depth on TeM by a set of selected lakes.
Direct maximum parsimony phylogeny reconstruction from genotype data
Directory of Open Access Journals (Sweden)
Ravi R
2007-12-01
Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower-bound on the number of mutations that the genetic region has undergone.
Maximum entropy decomposition of quadrupole mass spectra
International Nuclear Information System (INIS)
Toussaint, U. von; Dose, V.; Golan, A.
2004-01-01
We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast
Maximum power operation of interacting molecular motors
DEFF Research Database (Denmark)
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
On the maximum drawdown during speculative bubbles
Rotundo, Giulia; Navarra, Mauro
2007-08-01
A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outliers with respect to the bulk of the drawdown price movement distribution. This paper goes deeper into the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of drawdown and maximum drawdown movements of indices prices. The analysis of drawdown duration is also performed and it is the core of the risk measure estimated here.
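A drawdown at time t is the relative decline from the running price peak, and the maximum drawdown is the largest such decline over the observation window. A minimal sketch of the computation (illustrative, not the paper's estimator):

```python
def max_drawdown(prices):
    """Maximum drawdown: the largest peak-to-trough relative decline
    over the price series, as a fraction of the running peak."""
    peak = prices[0]
    mdd = 0.0
    for p in prices:
        peak = max(peak, p)                # running peak so far
        mdd = max(mdd, (peak - p) / peak)  # current drawdown from peak
    return mdd

# peak of 120 followed by a trough of 80: drawdown (120-80)/120 = 1/3
mdd = max_drawdown([100, 120, 90, 110, 80])
```

Drawdown duration, the other quantity examined in the paper, would additionally track how long the series stays below each running peak.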
Multi-Channel Maximum Likelihood Pitch Estimation
DEFF Research Database (Denmark)
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence more generally applicable.
Conductivity maximum in a charged colloidal suspension
Energy Technology Data Exchange (ETDEWEB)
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Dynamical maximum entropy approach to flocking.
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Multiperiod Maximum Loss is time unit invariant.
Kovacevic, Raimund M; Breuer, Thomas
2016-01-01
Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Improved Maximum Parsimony Models for Phylogenetic Networks.
Van Iersel, Leo; Jones, Mark; Scornavacca, Celine
2018-05-01
Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that makes it possible to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.
Every plane graph of maximum degree 8 has an edge-face 9-colouring.
R.J. Kang (Ross); J.-S. Sereni; M. Stehlík
2011-01-01
An edge-face coloring of a plane graph with edge set $E$ and face set $F$ is a coloring of the elements of $E \cup F$ such that adjacent or incident elements receive different colors. Borodin proved that every plane graph of maximum degree $\Delta \ge 10$ can be edge-face colored with
Sun, Fengxin; Wang, Jufeng; Cheng, Rongjun; Ge, Hongxia
2018-02-01
The optimal driving speeds of different vehicles may differ for the same headway. In the optimal velocity function of the optimal velocity (OV) model, the maximum speed vmax is an important parameter determining the optimal driving speed. A vehicle with a higher maximum speed is more willing to drive faster than one with a lower maximum speed in similar situations. By incorporating the anticipation driving behavior of relative velocity and mixed maximum speeds of different percentages into the optimal velocity function, an extended heterogeneous car-following model is presented in this paper. The analytical linear stability condition for this extended heterogeneous traffic model is obtained by using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from the cooperation between anticipation driving behavior and heterogeneous maximum speeds in the optimal velocity function. The analytical and numerical results both demonstrate that strengthening the driver's anticipation effect can improve the stability of heterogeneous traffic flow, that increasing the lowest value in the mixed maximum speeds will result in more instability, and that increasing the value or proportion of the part that already has a higher maximum speed affects stability differently at high and low traffic densities.
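The role of vmax in the optimal velocity function can be sketched with the classic single-class OV model and its well-known linear stability criterion a > 2V'(h); this is a simplified stand-in under assumed parameter values, not the heterogeneous condition derived in the paper.

```python
import math

def ov(headway, vmax=2.0, hc=4.0):
    """Standard Bando-type optimal velocity function; vmax and the safety
    headway hc are illustrative values, not the paper's calibration."""
    return vmax / 2.0 * (math.tanh(headway - hc) + math.tanh(hc))

def ov_slope(headway, vmax=2.0, hc=4.0, eps=1e-6):
    # numerical derivative V'(h) by central differences
    return (ov(headway + eps, vmax, hc) - ov(headway - eps, vmax, hc)) / (2 * eps)

def linearly_stable(sensitivity, headway, vmax=2.0):
    """Classic homogeneous OV stability criterion a > 2 V'(h): a larger vmax
    steepens V'(h) near the turning point, which tends to destabilize flow."""
    return sensitivity > 2.0 * ov_slope(headway, vmax)
```

At the turning point h = hc the slope equals vmax/2, so raising vmax raises the sensitivity a driver needs for stable flow, which mirrors the abstract's observation about heterogeneous maximum speeds.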
Toward Practical Secure Stable Matching
Directory of Open Access Journals (Sweden)
Riazi M. Sadegh
2017-01-01
The Stable Matching (SM) algorithm has been deployed in many real-world scenarios, including the National Residency Matching Program (NRMP) and financial applications such as the matching of suppliers and consumers in capital markets. Since these applications typically involve highly sensitive information such as the underlying preference lists, their current implementations rely on trusted third parties. This paper introduces the first provably secure and scalable implementation of SM, based on Yao's garbled circuit protocol and Oblivious RAM (ORAM). Our scheme can securely compute a stable match for 8k pairs four orders of magnitude faster than the previously best known method. We achieve this by introducing a compact and efficient sub-linear size circuit. We further decrease the computation cost by three orders of magnitude by proposing a novel technique to avoid unnecessary iterations in the SM algorithm. We evaluate our implementation for several problem sizes and plan to publish it as open source.
Stable isotope research pool inventory
International Nuclear Information System (INIS)
1986-08-01
This report contains a listing of electromagnetically separated stable isotopes which are available at the Oak Ridge National Laboratory for distribution for nondestructive research use on a loan basis. This inventory includes all samples of stable isotopes in the Research Materials Collection and does not designate whether a sample is out on loan or is in reprocessing. For some of the high-abundance, naturally occurring isotopes, larger amounts can be made available; for example, Ca-40 and Fe-56. All requests for the loan of samples should be submitted with a summary of the purpose of the loan to: Isotope Distribution Office, Oak Ridge National Laboratory, P.O. Box X, Oak Ridge, Tennessee 37831. Requests from non-DOE contractors and from foreign institutions require DOE approval.
Stable isotopes and the environment
International Nuclear Information System (INIS)
Krouse, H.R.
1990-01-01
Whereas traditionally, stable isotope research has been directed towards resource exploration and development, it is finding more frequent applications in helping to assess the impacts of resource utilization upon ecosystems. Among the many pursuits, two themes are evident: tracing the transport and conversions of pollutants in the environment and better understanding of the interplay among environmental receptors, e.g. food web studies. Stable isotope data are used primarily to identify the presence of pollutants in the environment and with a few exceptions, the consequence of their presence must be assessed by other techniques. Increasing attention has been given to the isotopic composition of humans with many potential applications in areas such as paleodiets, medicine, and criminology. In this brief overview examples are used from the Pacific Rim to illustrate the above concepts. 26 refs., 1 tab., 3 figs
Towards stable acceleration in LINACS
Dubrovskiy, A D
2014-01-01
Ultra-stable and -reproducible high-energy particle beams with short bunches are needed in novel linear accelerators and, in particular, in the Compact Linear Collider CLIC. A passive beam phase stabilization system based on a bunch compression with a negative transfer matrix element R56 and acceleration at a positive off-crest phase is proposed. The motivation and expected advantages of the proposed scheme are outlined.
Stable Structures for Distributed Applications
Eugen DUMITRASCU; Ion IVAN
2008-01-01
For distributed applications, we define the linear, tree and graph structure types with different variants and modalities to aggregate them. The distributed applications have assigned structures that through their characteristics influence the costs of stages for developing cycle and the costs for exploitation, transferred to each user. We also present the quality characteristics of a structure for a stable application, which is focused on stability characteristic. For that characteristic we ...
Objective Bayesianism and the Maximum Entropy Principle
Directory of Open Access Journals (Sweden)
Jon Williamson
2013-09-01
Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
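To make the maximum entropy principle concrete: among all distributions calibrated to a given constraint, the entropy maximizer belongs to an exponential family whose Lagrange multiplier can be found numerically. The sketch below is the textbook construction for a finite support with a fixed mean; the support values and the bisection bounds are illustrative assumptions.

```python
import math

def maxent_with_mean(xs, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy pmf on support xs subject to a fixed mean.
    The solution has the form p_i proportional to exp(beta * x_i);
    beta is found by bisection so the mean matches target_mean."""
    def mean_for(beta):
        w = [math.exp(beta * x) for x in xs]
        z = sum(w)
        return sum(x * wi for x, wi in zip(xs, w)) / z

    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mean_for(mid) < target_mean:   # mean is increasing in beta
            lo = mid
        else:
            hi = mid
    beta = (lo + hi) / 2.0
    w = [math.exp(beta * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

# With the mean pinned at the support midpoint, maximum entropy
# recovers the uniform distribution (beta = 0).
p = maxent_with_mean([1, 2, 3], 2.0)
```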
Efficient heuristics for maximum common substructure search.
Englert, Péter; Kovács, Péter
2015-05-26
Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
Hydraulic Limits on Maximum Plant Transpiration
Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.
2011-12-01
Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). In contrast to previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
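The tradeoff described above (a stronger driving force versus cavitation-reduced conductivity) yields an interior flux maximum. A minimal numerical sketch, assuming a Weibull-type vulnerability curve, conductivity evaluated at the leaf potential rather than integrated along the flow path, and purely illustrative parameter values:

```python
import math

def conductivity(psi, kmax=1.0, b=2.0, c=3.0):
    """Weibull-type vulnerability curve: conductivity declines as water
    potential psi (MPa) becomes more negative and the xylem cavitates.
    kmax, b, c are hypothetical values, not entries of the paper's database."""
    return kmax * math.exp(-((-psi / b) ** c))

def max_transpiration(psi_soil=-0.1, n=2000):
    """Scan leaf water potentials below psi_soil for the maximum of
    E = K(psi_leaf) * (psi_soil - psi_leaf): more negative psi_leaf
    increases the driving force but shrinks the conductivity."""
    best_psi, best_e = psi_soil, 0.0
    for i in range(1, n + 1):
        psi_leaf = psi_soil - i * (8.0 / n)   # scan down to psi_soil - 8 MPa
        e = conductivity(psi_leaf) * (psi_soil - psi_leaf)
        if e > best_e:
            best_psi, best_e = psi_leaf, e
    return best_psi, best_e
```

The optimum lands strictly between the two extremes, which is the negative feedback the abstract describes: near psi_soil the driving force vanishes, at very negative potentials the conductivity does.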
Maximum entropy principle and hydrodynamic models in statistical mechanics
International Nuclear Information System (INIS)
Trovato, M.; Reggiani, L.
2012-01-01
This review presents the state of the art of the maximum entropy principle (MEP) in its classical and quantum (QMEP) formulations. Within the classical MEP we overview a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport in the presence of electric fields of arbitrary strength. For the macroscopic variables, the linearized maximum entropy approach is developed, including full-band effects within a total energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. Analogously, the theoretical approach is applied to many one-dimensional n⁺nn⁺ submicron Si structures by using different band structure models, different doping profiles, and different applied biases, and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with available experimental data. Within the quantum MEP we introduce a quantum entropy functional of the reduced density matrix; the principle of quantum maximum entropy is then asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we have developed a comprehensive theoretical formalism to rigorously construct a closed quantum hydrodynamic transport model within a Wigner function approach. The theory is formulated both in thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ², where ħ is the reduced Planck constant. In particular, by using an arbitrary number of moments, we prove that: i) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives both of the
Analogue of Pontryagin's maximum principle for multiple integrals minimization problems
Mikhail, Zelikin
2016-01-01
A theorem analogous to Pontryagin's maximum principle is proved for multiple-integral minimization problems. Unlike the usual maximum principle, the maximum is taken not over all matrices but only over matrices of rank one. Examples are given.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Stable carbides in transition metal alloys
International Nuclear Information System (INIS)
Piotrkowski, R.
1991-01-01
In the present work, different techniques were employed for the identification of stable carbides in two sets of transition metal alloys of wide technological application: a set of three high-alloy M2-type steels in which W and/or Mo were totally or partially replaced by Nb, and a Zr-2.5 Nb alloy. The M2 steel is a high-speed steel used worldwide, and the Zr-2.5 Nb alloy is the base material for the pressure tubes in CANDU-type nuclear reactors. The stability of carbides was studied within the framework of Goldschmidt's theory of interstitial alloys. The identification of stable carbides in the steels was performed by determining their metallic composition with an energy analyzer attached to the scanning electron microscope (SEM). By these means the typical carbides of the M2 steel, MC and M₆C, were found. Moreover, the spatial and size distributions of carbide particles were determined after different heat treatments, and both microstructure and microhardness were correlated with the appearance of the secondary hardening phenomenon. In the Zr-Nb alloy, a study of the α and β phases present after different heat treatments was performed with optical and SEM metallographic techniques, guided by the Abriata-Bolcich phase diagram. The α-β interphase boundaries were characterized as short circuits for diffusion with radiotracer techniques, applying the Fisher-Bondy-Martin model. The precipitation of carbides was promoted by heat treatments that first produced C diffusion into the samples at high temperature (β phase), followed by the precipitation of carbide particles at lower temperature (α phase or (α+β) two-phase field). The precipitated carbides were identified as (Zr,Nb)C₁₋ₓ by SEM, electron microprobe, and X-ray diffraction techniques. (Author) [es
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
Maximum Profit Configurations of Commercial Engines
Directory of Open Access Journals (Sweden)
Yiran Chen
2011-06-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state (market equilibrium) is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.
Modelling maximum likelihood estimation of availability
International Nuclear Information System (INIS)
Waller, R.A.; Tietjen, G.L.; Rock, G.W.
1975-01-01
Suppose the performance of a nuclear-powered electrical generating plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability, that is, the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and time-to-repair variables, respectively. Once those statistical models are specified, the availability A(t) can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter λ and the time-to-repair model for Y is an exponential density with parameter θ. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = λ/(λ+θ) + [θ/(λ+θ)]·exp[−(1/λ + 1/θ)t] for t > 0. Also, the steady-state availability is A(∞) = λ/(λ+θ). We use the observations from n failure-repair cycles of the power plant, say X₁, X₂, ..., Xₙ, Y₁, Y₂, ..., Yₙ, to present the maximum likelihood estimators of A(t) and A(∞). The exact sampling distributions of those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
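The availability formula in the abstract is straightforward to evaluate, and since the MLE of an exponential mean is the sample mean, a plug-in estimator of A(t) follows directly. A short sketch (function names are illustrative):

```python
import math

def availability(t, lam, theta):
    """Instantaneous availability A(t) for exponential time-to-failure
    (mean lam) and time-to-repair (mean theta), as given in the abstract:
    A(t) = lam/(lam+theta) + [theta/(lam+theta)] * exp(-(1/lam + 1/theta) t)."""
    steady = lam / (lam + theta)
    return steady + (theta / (lam + theta)) * math.exp(-(1.0/lam + 1.0/theta) * t)

def mle_availability(x_samples, y_samples, t):
    """Plug-in MLE: sample means of the observed failure and repair times
    estimate lam and theta, and A(t) is evaluated at those estimates."""
    lam_hat = sum(x_samples) / len(x_samples)
    theta_hat = sum(y_samples) / len(y_samples)
    return availability(t, lam_hat, theta_hat)
```

At t = 0 the plant is surely up, A(0) = 1, and as t grows A(t) decays to the steady-state value λ/(λ+θ), matching the two expressions in the abstract.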
Organic synthesis with stable isotopes
International Nuclear Information System (INIS)
Daub, G.H.; Kerr, V.N.; Williams, D.L.; Whaley, T.W.
1978-01-01
Some general considerations concerning organic synthesis with stable isotopes are presented. Illustrative examples are described and discussed. The examples include DL-2-amino-3-methyl-¹³C-butanoic-3,4-¹³C₂ acid (DL-valine-¹³C₃); methyl oleate-1-¹³C; thymine-2,6-¹³C₂; 2-aminoethanesulfonic-¹³C acid (taurine-¹³C); D-glucose-6-¹³C; DL-2-amino-3-methylpentanoic-3,4-¹³C₂ acid (DL-isoleucine-¹³C₂); benzidine-¹⁵N₂; and 4-ethylsulfonyl-1-naphthalene-sulfonamide-¹⁵N.
Stable isotopes - separation and application
International Nuclear Information System (INIS)
Lockhart, I.M.
1980-01-01
In this review, methods used for the separation of stable isotopes (¹²C, ¹³C, ¹⁴N, ¹⁵N, ¹⁶O, ¹⁷O, ¹⁸O, ³⁴S) will be described. The synthesis of labelled compounds, techniques for detection and assay, and areas of application will also be discussed. Particular attention will be paid to the isotopes of carbon, nitrogen, and oxygen; to date, sulphur isotopes have only assumed a minor role. The field of deuterium chemistry is too extensive for adequate treatment; it will therefore be essentially excluded. (author)
Stable agents for imaging investigations
International Nuclear Information System (INIS)
Tofe, A.J.
1976-01-01
This invention concerns highly stable compounds useful in preparing technetium-99m based scintiscanning agents. The compounds of this invention include a pertechnetate reducing agent or a solution of oxidized pertechnetate and an efficient proportion, sufficient to stabilize the compounds in the presence of oxygen and of radiolysis products, of ascorbic acid or a pharmaceutically acceptable salt or ester of this acid. The invention also concerns an improved process for preparing a technetium-based exploration agent, consisting of codissolving the ascorbic acid, or a pharmaceutically acceptable salt or ester of such an acid, and a pertechnetate reducing agent in a solution of oxidized pertechnetate. [fr
A maximum power point tracking for photovoltaic-SPE system using a maximum current controller
Energy Technology Data Exchange (ETDEWEB)
Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)
2003-02-01
Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system based on the maximum current searching methods has been designed and implemented. Based on the characteristics of voltage-current and theoretical analysis of SPE, it can be shown that the tracking of the maximum current output of DC-DC converter in SPE side will track the MPPT of photovoltaic panel simultaneously. This method uses a proportional integrator controller to control the duty factor of DC-DC converter with pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
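The maximum-current-searching idea above can be illustrated with a generic perturb-and-observe loop on the converter duty factor: keep stepping in the direction that increases the measured output current and reverse on a decrease. This is a hypothetical toy sketch; the paper itself uses a proportional integrator controller driving a PWM, and the plant model below is invented for illustration.

```python
def output_current(duty):
    """Toy plant: converter output current vs. duty factor with a single
    interior maximum (a stand-in for the PV-SPE measurement in the paper)."""
    return max(0.0, 4.0 * duty * (1.0 - duty))   # peaks at duty = 0.5

def track_max_current(duty=0.2, step=0.01, iters=500):
    """Perturb-and-observe on the duty factor: move while the measured
    current rises, reverse direction on a drop. The tracker ends up
    oscillating within one step of the current maximum."""
    direction = 1.0
    last = output_current(duty)
    for _ in range(iters):
        duty = min(1.0, max(0.0, duty + direction * step))
        now = output_current(duty)
        if now < last:
            direction = -direction   # overshot the peak; reverse
        last = now
    return duty
```

Because maximizing the SPE-side output current tracks the PV maximum power point (per the abstract's voltage-current analysis), a current-only search like this avoids measuring power explicitly.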
Maximum support resistance with steel arch backfilling
Energy Technology Data Exchange (ETDEWEB)
1983-01-01
A system of backfilling for roadway arch supports to replace timber and debris lagging is described. Produced in West Germany, it is known as the Bullflex system and consists of 23 cm diameter woven textile tubing which is inflated with a pumpable hydraulically-setting filler of the type normally used in mines. The tube is placed between the back of the support units and the rock face and creates an early-stage interlocking effect.
Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong
2013-01-01
In this paper the combinations of maximum entropy method and Bayesian inference for reliability assessment of deteriorating system is proposed. Due to various uncertainties, less data and incomplete information, system parameters usually cannot be determined precisely. These uncertainty parameters can be modeled by fuzzy sets theory and the Bayesian inference which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...
A novel maximum power point tracking method for PV systems using fuzzy cognitive networks (FCN)
Energy Technology Data Exchange (ETDEWEB)
Karlis, A.D. [Electrical Machines Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece); Kottas, T.L.; Boutalis, Y.S. [Automatic Control Systems Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece)
2007-03-15
Maximum power point trackers (MPPTs) play an important role in photovoltaic (PV) power systems because they maximize the power output from a PV system for a given set of conditions, and therefore maximize the array efficiency. This paper presents a novel MPPT method based on fuzzy cognitive networks (FCN). The new method gives a good maximum power operation of any PV array under different conditions such as changing insolation and temperature. The numerical results show the effectiveness of the proposed algorithm. (author)
Maximum power analysis of photovoltaic module in Ramadi city
Energy Technology Data Exchange (ETDEWEB)
Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)
2013-07-01
Performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power which can be obtained in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time by using the Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The solar irradiance data are measured on the earth's surface in the campus area of Anbar University. Actual average readings were taken from the data logger of the sun tracker system, which is set to save the average reading every two minutes, based on samples taken each second. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the system sizing of PV can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of PV modules.
Superfast maximum-likelihood reconstruction for quantum tomography
Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon
2017-06-01
Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
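The projected-gradient philosophy in the abstract can be shown on a drastically simplified classical analogue: for a diagonal (commuting) state, ML tomography reduces to maximizing a multinomial log-likelihood over the probability simplex, and each gradient step is followed by a Euclidean projection back onto the feasible set. The sketch below is this classical stand-in, not the paper's accelerated density-matrix algorithm; step size and iteration count are illustrative.

```python
def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (the standard sort-and-threshold algorithm)."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(x - theta, 0.0) for x in v]

def mle_projected_gradient(counts, steps=5000, lr=1e-3):
    """Projected-gradient ascent on the multinomial log-likelihood
    sum_k f_k * log p_k; the known closed-form MLE p_k = counts_k / N
    lets us check convergence."""
    total = sum(counts)
    f = [c / total for c in counts]
    k = len(counts)
    p = [1.0 / k] * k
    for _ in range(steps):
        grad = [fi / max(pi, 1e-12) for fi, pi in zip(f, p)]
        p = project_simplex([pi + lr * g for pi, g in zip(p, grad)])
    return p
```

In the full quantum problem the projection is onto the set of unit-trace positive semidefinite matrices (via an eigendecomposition and this same simplex projection applied to the eigenvalues), which is how the approach accommodates the quantum constraints.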
Sequential and Parallel Algorithms for Finding a Maximum Convex Polygon
DEFF Research Database (Denmark)
Fischer, Paul
1997-01-01
This paper investigates the problem where one is given a finite set of n points in the plane, each of which is labeled either "positive" or "negative". We consider bounded convex polygons, the vertices of which are positive points and which do not contain any negative point. It is shown how such a polygon which is maximal with respect to area can be found in time O(n³ log n). With the same running time one can also find such a polygon which contains a maximum number of positive points. If, in addition, the number of vertices of the polygon is restricted to be at most M, then the running time becomes O(M n³ log n). It is also shown how to find a maximum convex polygon which contains a given point in time O(n³ log n). Two parallel algorithms for the basic problem are also presented. The first one runs in time O(n log n) using O(n²) processors, the second one has polylogarithmic time but needs O
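The smallest interesting instance of this problem, a maximum-area triangle with positive vertices containing no negative point, can be solved by brute force, which makes the objective concrete. This is only the base case; the paper's O(n³ log n) algorithm handles polygons with any number of vertices and is not reproduced here.

```python
from itertools import combinations

def area2(a, b, c):
    # twice the signed area of triangle abc (positive if counterclockwise)
    return (b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0])

def contains(tri, p):
    # p lies inside or on triangle tri iff it is on the same side
    # (or boundary) of all three directed edges
    a, b, c = tri
    d1, d2, d3 = area2(a, b, p), area2(b, c, p), area2(c, a, p)
    return (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0)

def max_empty_triangle(positive, negative):
    """Brute force over all triangles with positive vertices, rejecting
    those that contain a negative point; returns the largest by area."""
    best, best_area = None, 0.0
    for tri in combinations(positive, 3):
        if any(contains(tri, q) for q in negative):
            continue
        a = abs(area2(*tri)) / 2.0
        if a > best_area:
            best, best_area = tri, a
    return best, best_area
```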
Bootstrap-based Support of HGT Inferred by Maximum Parsimony
Directory of Open Access Journals (Sweden)
Nakhleh Luay
2010-05-01
Background: Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. Results: In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. Conclusions: We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.
Bootstrap-based support of HGT inferred by maximum parsimony.
Park, Hyun Jung; Jin, Guohua; Nakhleh, Luay
2010-05-05
Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.
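The resampling scheme described (generate pseudo-alignments, re-infer events, tally how often each event reappears) can be sketched generically. The `infer_events` callable below is a placeholder standing in for the parsimony-based network inference, which is not reproduced here; all names are illustrative.

```python
import random

def bootstrap_support(columns, infer_events, samples=100, rng=None):
    """Column-resampling bootstrap in the spirit of the method: resample
    alignment columns with replacement, run the inference on each
    pseudo-alignment, and report the fraction of replicates in which
    each inferred event reappears."""
    rng = rng or random.Random(0)   # seeded for reproducibility
    tally = {}
    for _ in range(samples):
        resampled = [rng.choice(columns) for _ in columns]
        for event in infer_events(resampled):
            tally[event] = tally.get(event, 0) + 1
    return {e: n / samples for e, n in tally.items()}
```

Events inferred in nearly every replicate get support near 1 and are kept; low-support events are the ones the threshold-based ad hoc approach would have had trouble rejecting in a principled way.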
Mackeprang, Rasmus
2007-01-01
Several extensions to the SM feature heavy long-lived particles with masses of O(10^2-10^3 GeV) and mean lifetimes fulfilling $c\tau \geq 10\,$m. Among such theories are supersymmetric scenarios as well as extra-dimensional models in which the heavy new particles are seen as Kaluza-Klein excitations of the well-known SM particles. Such particles will, from the point of view of a collider experiment, be seen as stable. This thesis is concerned with the case where the exotic heavy particles can be considered stable while traversing the detector. Specifically, the case is considered where the particles in question carry the charge of the strong nuclear force, commonly referred to as colour charge. A simulation kit has been developed using GEANT4. This framework is the current standard in experimental particle physics for the simulation of interactions of particles with matter, and it is used extensively for detector simulation. The simulation describes the interactions of these particles with matter which i...
38 CFR 18.434 - Education setting.
2010-07-01
... not handicapped to the maximum extent appropriate to the needs of the handicapped person. A recipient shall place a handicapped person in the regular educational environment operated by the recipient unless... Adult Education § 18.434 Education setting. (a) Academic setting. A recipient shall educate, or shall...
Maximum Recoverable Gas from Hydrate Bearing Sediments by Depressurization
Terzariol, Marco
2017-11-13
The estimation of gas production rates from hydrate bearing sediments requires complex numerical simulations. This manuscript presents a set of simple and robust analytical solutions to estimate the maximum depressurization-driven recoverable gas. These limiting-equilibrium solutions are established when the dissociation front reaches steady state conditions and ceases to expand further. Analytical solutions show the relevance of (1) relative permeabilities between the hydrate free sediment, the hydrate bearing sediment, and the aquitard layers, and (2) the extent of depressurization in terms of the fluid pressures at the well, at the phase boundary, and in the far field. Closed-form solutions for the size of the produced zone allow for expeditious financial analyses; results highlight the need for innovative production strategies in order to make hydrate accumulations an economically-viable energy resource. Horizontal directional drilling and multi-wellpoint seafloor dewatering installations may lead to advantageous production strategies in shallow seafloor reservoirs.
LIBOR troubles: Anomalous movements detection based on maximum entropy
Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria
2016-05-01
According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR integrity, leading surveillance authorities to conduct investigations on banks' behavior. Such procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.
Marginal Maximum Likelihood Estimation of Item Response Models in R
Directory of Open Access Journals (Sweden)
Matthew S. Johnson
2007-02-01
Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
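As an illustration of marginal maximum likelihood, the sketch below integrates ability out of a Rasch (one-parameter logistic) model with Gauss-Hermite quadrature and fits item difficulties by a crude cyclic grid search; this is a stand-in for the EM and Newton-type schemes used by real IRT software, not the paper's R code.

```python
import numpy as np

def rasch_marginal_loglik(b, X):
    """Marginal log-likelihood of a Rasch model.

    b : (J,) item difficulties; X : (N, J) 0/1 response matrix.
    Ability theta is integrated out against a standard-normal prior
    using 21-point Gauss-Hermite quadrature.
    """
    t, w = np.polynomial.hermite.hermgauss(21)   # nodes/weights for e^{-t^2}
    theta = np.sqrt(2.0) * t                     # change of variables to N(0, 1)
    w = w / np.sqrt(np.pi)                       # weights now sum to 1
    # P(X_ij = 1 | theta_q) for each quadrature node q and item j
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))              # (Q, J)
    # per-person likelihood at each node, then quadrature sum over nodes
    like = np.prod(np.where(X[:, None, :] == 1, p[None], 1 - p[None]), axis=2)  # (N, Q)
    return float(np.sum(np.log(like @ w)))

def fit_difficulties(X, grid=np.linspace(-3, 3, 61), sweeps=3):
    """Crude MML estimate: cyclic grid search over each item difficulty."""
    b = np.zeros(X.shape[1])
    for _ in range(sweeps):
        for j in range(len(b)):
            scores = [rasch_marginal_loglik(np.r_[b[:j], g, b[j+1:]], X)
                      for g in grid]
            b[j] = grid[int(np.argmax(scores))]
    return b
```

On simulated data the recovered difficulties track the generating values, which is the essential property the marginal (as opposed to joint) likelihood buys: item parameters are estimated consistently without estimating every person's ability.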
Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics
DEFF Research Database (Denmark)
Schlaikjer, Malene; Jensen, Jørgen Arendt
2004-01-01
…-data under investigation. The flow physics properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis is compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator … has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion. The CC and the MLE produce … for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the number of incorrect velocity estimates is 0, 19.1, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and to estimate low velocity levels was confirmed …
The scaling of maximum and basal metabolic rates of mammals and birds
Barbosa, Lauro A.; Garcia, Guilherme J. M.; da Silva, Jafferson K. L.
2006-01-01
Allometric scaling is one of the most pervasive laws in biology. Its origin, however, is still a matter of dispute. Recent studies have established that maximum metabolic rate scales with an exponent larger than that found for basal metabolism. This unpredicted result sets a challenge that can decide which of the concurrent hypotheses is the correct theory. Here, we show that both scaling laws can be deduced from a single network model. Besides the 3/4-law for basal metabolism, the model predicts power-law scalings with body mass M for maximum metabolic rate, maximum heart rate, and muscular capillary density, in agreement with data.
TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS
Energy Technology Data Exchange (ETDEWEB)
Buckley, R.; DuPont, A.; Kurzeja, R.; Parker, M.
2007-11-12
Mixing depth is an important quantity in the determination of air pollution concentrations. Fire-weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days in which special balloon soundings were released are also discussed.
Maximum power flux of auroral kilometric radiation
International Nuclear Information System (INIS)
Benson, R.F.; Fainberg, J.
1991-01-01
The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 × 10⁻¹³ W m⁻² Hz⁻¹ at 360 kHz, normalized to a radial distance r of 25 R_E assuming the power falls off as r⁻². A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3.
Maximum likelihood window for time delay estimation
International Nuclear Information System (INIS)
Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup
2004-01-01
Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. The method has been validated in experiments and provides much clearer and more precise peaks in cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting of the significant frequencies.
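The underlying time-arrival-difference method can be illustrated without the ML windowing refinement: locate the cross-correlation peak, convert the lag to a delay, and place the leak from the delay and the elastic wave speed. The two-sensor localization formula `0.5 * (L - c * delay)` is the standard one; sampling rate and pipe parameters below are illustrative.

```python
import numpy as np

def delay_by_crosscorr(s1, s2, fs):
    """Time delay (s) between two equal-length signals via the peak of
    their cross-correlation. Positive delay means s2 lags s1."""
    c = np.correlate(s2, s1, mode="full")
    lag = int(np.argmax(c)) - (len(s1) - 1)
    return lag / fs

def leak_position(delay, length, wave_speed):
    """Distance (m) of the leak from sensor 1 on a pipe of given length,
    from the arrival-time difference and the elastic wave speed."""
    return 0.5 * (length - wave_speed * delay)
```

The ML window of the paper improves on this by weighting the significant frequency bands before correlating, which sharpens the peak that `argmax` picks out.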
Tutte sets in graphs I: Maximal Tutte sets and D-graphs
Bauer, D.; Broersma, Haitze J.; Morgana, A.; Schmeichel, E.
A well-known formula of Tutte and Berge expresses the size of a maximum matching in a graph $G$ in terms of what is usually called the deficiency of $G$. A subset $X$ of $V(G)$ for which this deficiency is attained is called a Tutte set of $G$. While much is known about maximum matchings, less is
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...
Half-width at half-maximum, full-width at half-maximum analysis
Indian Academy of Sciences (India)
addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.
Stable isotope mass spectrometry in petroleum exploration
International Nuclear Information System (INIS)
Mathur, Manju
1997-01-01
The stable isotope mass spectrometry plays an important role to evaluate the stable isotopic composition of hydrocarbons. The isotopic ratios of certain elements in petroleum samples reflect certain characteristics which are useful for petroleum exploration
ON THE MAXIMUM MASS OF STELLAR BLACK HOLES
International Nuclear Information System (INIS)
Belczynski, Krzysztof; Fryer, Chris L.; Bulik, Tomasz; Ruiter, Ashley; Valsecchi, Francesca; Vink, Jorick S.; Hurley, Jarrod R.
2010-01-01
We present the spectrum of compact object masses: neutron stars and black holes (BHs) that originate from single stars in different environments. In particular, we calculate the dependence of maximum BH mass on metallicity and on some specific wind mass loss rates (e.g., Hurley et al. and Vink et al.). Our calculations show that the highest mass BHs observed in the Galaxy, M_bh ∼ 15 M_⊙ in the high metallicity environment (Z = Z_⊙ = 0.02), can be explained with stellar models and the wind mass loss rates adopted here. To reach this result we had to set luminous blue variable mass loss rates at the level of ∼10⁻⁴ M_⊙ yr⁻¹ and to employ metallicity-dependent Wolf-Rayet winds. With such winds, calibrated on Galactic BH mass measurements, the maximum BH mass obtained for moderate metallicity (Z = 0.3 Z_⊙ = 0.006) is M_bh,max = 30 M_⊙. This is a rather striking finding as the mass of the most massive known stellar BH is M_bh = 23-34 M_⊙ and, in fact, it is located in a small star-forming galaxy with moderate metallicity. We find that in the very low (globular cluster-like) metallicity environment the maximum BH mass can be as high as M_bh,max = 80 M_⊙ (Z = 0.01 Z_⊙ = 0.0002). It is interesting to note that the X-ray luminosity from Eddington-limited accretion onto an 80 M_⊙ BH is of the order of ∼10⁴⁰ erg s⁻¹ and is comparable to luminosities of some known ultra-luminous X-ray sources. We emphasize that our results were obtained for single stars only and that binary interactions may alter these maximum BH masses (e.g., accretion from a close companion). This is strictly a proof-of-principle study which demonstrates that stellar models can naturally explain even the most massive known stellar BHs.
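The quoted X-ray luminosity can be checked with the standard Eddington formula L_Edd = 4πGMm_p c/σ_T ≈ 1.26 × 10³⁸ (M/M_⊙) erg s⁻¹; this is a textbook estimate for ionized-hydrogen accretion, not a calculation from the paper.

```python
import math

# CGS constants
G      = 6.674e-8    # cm^3 g^-1 s^-2
c      = 2.998e10    # cm s^-1
m_p    = 1.673e-24   # g (proton mass)
sigmaT = 6.652e-25   # cm^2 (Thomson cross-section)
M_sun  = 1.989e33    # g

def eddington_luminosity(mass_solar):
    """Eddington luminosity (erg/s) for ionized-hydrogen accretion."""
    return 4 * math.pi * G * mass_solar * M_sun * m_p * c / sigmaT

# For an 80 M_sun BH this comes out at ~1e40 erg/s,
# consistent with the ULX comparison made in the abstract.
print(f"{eddington_luminosity(80):.2e}")
```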
Tail Risk Constraints and Maximum Entropy
Directory of Open Access Journals (Sweden)
Donald Geman
2015-06-01
Full Text Available Portfolio selection in the financial literature has essentially been analyzed under two central assumptions: full knowledge of the joint probability distribution of the returns of the securities that will comprise the target portfolio; and investors’ preferences are expressed through a utility function. In the real world, operators build portfolios under risk constraints which are expressed both by their clients and regulators and which bear on the maximal loss that may be generated over a given time period at a given confidence level (the so-called Value at Risk of the position). Interestingly, in the finance literature, a serious discussion of how much or little is known from a probabilistic standpoint about the multi-dimensional density of the assets’ returns seems to be of limited relevance. Our approach in contrast is to highlight these issues and then adopt throughout a framework of entropy maximization to represent the real world ignorance of the “true” probability distributions, both univariate and multivariate, of traded securities’ returns. In this setting, we identify the optimal portfolio under a number of downside risk constraints. Two interesting results are exhibited: (i) the left-tail constraints are sufficiently powerful to override all other considerations in the conventional theory; (ii) the “barbell portfolio” (maximal certainty/low risk in one set of holdings, maximal uncertainty in another), which is quite familiar to traders, naturally emerges in our construction.
International Nuclear Information System (INIS)
Peřinová, Vlasta; Lukš, Antonín
2015-01-01
The SU(2) group is used in two different fields of quantum optics, quantum polarization and quantum interferometry. Quantum degrees of polarization may be based on distances of a polarization state from the set of unpolarized states. The maximum polarization is achieved in the case where the state is pure and then the distribution of the photon-number sums is optimized. In quantum interferometry, the SU(2) intelligent states also have the property that the Fisher measure of information is equal to the inverse minimum detectable phase shift on the usual simplifying condition. Previously, the optimization of the Fisher information under a constraint was studied. Now, in the framework of constraint optimization, states similar to the SU(2) intelligent states are treated. (paper)
A maximum likelihood framework for protein design
Directory of Open Access Journals (Sweden)
Philippe Hervé
2006-06-01
Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces
Stable rotating dipole solitons in nonlocal media
DEFF Research Database (Denmark)
Lopez-Aguayo, Servando; Skupin, Stefan; Desyatnikov, Anton S.
2006-01-01
We present the first example of stable rotating two-soliton bound states in nonlinear optical media with nonlocal response. We show that, in contrast to media with local response, nonlocality opens possibilities to generate stable azimuthons.
Tempered stable laws as random walk limits
Chakrabarty, Arijit; Meerschaert, Mark M.
2010-01-01
Stable laws can be tempered by modifying the Lévy measure to cool the probability of large jumps. Tempered stable laws retain their signature power law behavior at infinity, and infinite divisibility. This paper develops random walk models that converge to a tempered stable law under a triangular array scheme. Since tempered stable laws and processes are useful in statistical physics, these random walk models can provide a basic physical model for the underlying physical phenomena.
Stable States of Biological Organisms
Yukalov, V. I.; Sornette, D.; Yukalova, E. P.; Henry, J.-Y.; Cobb, J. P.
2009-04-01
A novel model of biological organisms is advanced, treating an organism as a self-consistent system subject to a pathogen flux. The principal novelty of the model is that it describes not some parts, but a biological organism as a whole. The organism is modeled by a five-dimensional dynamical system. The organism homeostasis is described by the evolution equations for five interacting components: healthy cells, ill cells, innate immune cells, specific immune cells, and pathogens. The stability analysis demonstrates that, in a wide domain of the parameter space, the system exhibits robust structural stability. There always exist four stable stationary solutions characterizing four qualitatively differing states of the organism: alive state, boundary state, critical state, and dead state.
Periodicity of the stable isotopes
Boeyens, J C A
2003-01-01
It is demonstrated that all stable (non-radioactive) isotopes are formally interrelated as the products of systematically adding alpha particles to four elementary units. The region of stability against radioactive decay is shown to obey a general trend based on number theory and contains the periodic law of the elements as a special case. This general law restricts the number of what may be considered as natural elements to 100 and is based on a proton:neutron ratio that matches the golden ratio, characteristic of biological and crystal growth structures. Different forms of the periodic table inferred at other proton:neutron ratios indicate that the electronic configuration of atoms is variable and may be a function of environmental pressure. Cosmic consequences of this postulate are examined. (author)
Stable massive particles at colliders
Energy Technology Data Exchange (ETDEWEB)
Fairbairn, M. (Stockholm U.); Kraan, A.C. (Pennsylvania U.); Milstead, D.A. (Stockholm U.); Sjostrand, T. (Lund U.); Skands, P. (Fermilab); Sloan, T. (Lancaster U.)
2006-11-01
We review the theoretical motivations and experimental status of searches for stable massive particles (SMPs) which could be sufficiently long-lived as to be directly detected at collider experiments. The discovery of such particles would address a number of important questions in modern physics including the origin and composition of dark matter in the universe and the unification of the fundamental forces. This review describes the techniques used in SMP-searches at collider experiments and the limits so far obtained on the production of SMPs which possess various colour, electric and magnetic charge quantum numbers. We also describe theoretical scenarios which predict SMPs, the phenomenology needed to model their production at colliders and interactions with matter. In addition, the interplay between collider searches and open questions in cosmology such as dark matter composition are addressed.
Two-Agent Scheduling to Minimize the Maximum Cost with Position-Dependent Jobs
Directory of Open Access Journals (Sweden)
Long Wan
2015-01-01
Full Text Available This paper investigates a single-machine two-agent scheduling problem to minimize the maximum cost with position-dependent jobs. There are two agents, each with a set of independent jobs, competing to perform their jobs on a common machine. In our scheduling setting, the actual position-dependent processing time of a job is characterized by a variable function dependent on the position of the job in the sequence. Each agent wants to fulfil the objective of minimizing the maximum cost of its own jobs. We develop a feasible method to achieve all the Pareto optimal points in polynomial time.
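A standard building block for min-max cost objectives of this kind is Lawler's least-cost-last rule for the single-agent problem 1 || f_max with fixed processing times; the sketch below shows that classical rule only, not the paper's two-agent Pareto procedure or its position-dependent processing times.

```python
def lawler_min_max_cost(jobs, cost):
    """Lawler's least-cost-last rule for 1 || f_max.

    jobs : dict job name -> processing time
    cost : callable (name, completion_time) -> cost of finishing that job then
    Returns (sequence, max_cost). Optimal when each cost function is
    nondecreasing in the completion time.
    """
    remaining = dict(jobs)
    t = sum(remaining.values())          # makespan is fixed by total work
    seq_rev, worst = [], float("-inf")
    while remaining:
        # Schedule LAST the job that is cheapest to complete at time t.
        j = min(remaining, key=lambda k: cost(k, t))
        worst = max(worst, cost(j, t))
        seq_rev.append(j)
        t -= remaining.pop(j)
    return seq_rev[::-1], worst
```

With cost(j, t) = t - d_j this reduces to earliest-due-date ordering for maximum lateness; the two-agent extension in the paper repeatedly solves constrained versions of such a subproblem to trace out the Pareto frontier.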
Effectiveness and risks of stable iodine prophylaxis
International Nuclear Information System (INIS)
Waight, P.J.
1995-01-01
The factors upon which the efficacy of stable iodine prophylaxis depends are reviewed, with particular reference to the dose of stable iodine, the timing of the dose, the influence of dietary iodine and the impact of the other protective actions. The risks of stable iodine ingestion are estimated, and their application to the principle of Justification is outlined. (Author)
Temperature and Humidity Control in Livestock Stables
DEFF Research Database (Denmark)
Hansen, Michael; Andersen, Palle; Nielsen, Kirsten M.
2010-01-01
The paper describes temperature and humidity control of a livestock stable. It is important to have a correct air flow pattern in the livestock stable in order to achieve proper temperature and humidity control as well as to avoid draught. In the investigated livestock stable the air flow...
Institute of Scientific and Technical Information of China (English)
Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh
2017-01-01
The present study was carried out in order to track the maximum power point in a variable-speed turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, first, the rotor speed is set at an optimal point for different wind speeds. As a result, the tip speed ratio reaches an optimal point, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. Then, the maximum mechanical torque is tracked using electromechanical torque. In this technique, the tracking error integral of maximum mechanical torque, the error, and the derivative of the error are used as state variables. During changes in wind speed, sliding mode control is designed to absorb the maximum energy from the wind and minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed from a second-order integral operation of the original sliding mode control input signal. The result of the second-order integral in this model includes control signal integrity, full chattering attenuation, and prevention of large fluctuations in the generator power output. The simulation results, obtained using MATLAB/m-file software, have shown the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).
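The fixed relationships behind this kind of MPPT can be sketched with the classic optimal-torque law T_ref = K_opt ω²; holding the tip-speed ratio at its optimum makes torque-tracking equivalent to maximum-power tracking. Turbine parameters below are assumed for illustration, not taken from the paper, and the sliding mode controller itself is omitted.

```python
import math

# Illustrative turbine parameters (assumed, not from the paper)
RHO     = 1.225   # air density, kg/m^3
R       = 2.0     # blade radius, m
CP_MAX  = 0.48    # peak power coefficient
LAM_OPT = 8.1     # optimal tip-speed ratio

def optimal_rotor_speed(v_wind):
    """Rotor speed (rad/s) that holds the tip-speed ratio at its optimum."""
    return LAM_OPT * v_wind / R

def optimal_torque(omega):
    """Optimal-torque MPPT law: T_ref = K_opt * omega^2."""
    k_opt = 0.5 * RHO * math.pi * R**5 * CP_MAX / LAM_OPT**3
    return k_opt * omega**2

def max_power(v_wind):
    """Maximum extractable power at CP_MAX for a given wind speed."""
    return 0.5 * RHO * math.pi * R**2 * CP_MAX * v_wind**3
```

By construction, torque times rotor speed along this law equals the maximum power curve, which is why tracking the torque reference suffices for MPPT.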
Variation of Probable Maximum Precipitation in Brazos River Basin, TX
Bhatia, N.; Singh, V. P.
2017-12-01
The Brazos River basin, the second-largest river basin by area in Texas, generates the highest amount of flow volume of any river in a given year in Texas. With its headwaters located at the confluence of Double Mountain and Salt forks in Stonewall County, the third-longest flowline of the Brazos River traverses within narrow valleys in the area of rolling topography of west Texas, and flows through rugged terrains in mainly featureless plains of central Texas, before its confluence with Gulf of Mexico. Along its major flow network, the river basin covers six different climate regions characterized on the basis of similar attributes of vegetation, temperature, humidity, rainfall, and seasonal weather changes, by National Oceanic and Atmospheric Administration (NOAA). Our previous research on Texas climatology illustrated intensified precipitation regimes, which tend to result in extreme flood events. Such events have caused huge losses of lives and infrastructure in the Brazos River basin. Therefore, a region-specific investigation is required for analyzing precipitation regimes along the geographically-diverse river network. Owing to the topographical and hydroclimatological variations along the flow network, 24-hour Probable Maximum Precipitation (PMP) was estimated for different hydrologic units along the river network, using the revised Hershfield's method devised by Lan et al. (2017). The method incorporates the use of a standardized variable describing the maximum deviation from the average of a sample scaled by the standard deviation of the sample. The hydrometeorological literature identifies this method as more reasonable and consistent with the frequency equation. With respect to the calculation of stable data size required for statistically reliable results, this study also quantified the respective uncertainty associated with PMP values in different hydrologic units. The corresponding range of return periods of PMPs in different hydrologic units was
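A minimal version of the Hershfield frequency-factor calculation reads the largest standardized deviation off the annual-maximum series and applies it as K_m; the revisions of Lan et al. (2017) and the full procedure's adjustment factors are omitted here, so this is a sketch of the classical idea only.

```python
import statistics

def hershfield_pmp(annual_maxima):
    """Hershfield-type PMP estimate from a series of annual maximum
    24-h precipitation depths (result in the same units as the input).

    K_m is the largest observed standardized deviation, computed with the
    maximum value excluded from the mean and standard deviation, as in
    the classical procedure.
    """
    x = sorted(annual_maxima)
    x_max, rest = x[-1], x[:-1]
    mean_n1 = statistics.mean(rest)
    sd_n1 = statistics.stdev(rest)
    k_m = (x_max - mean_n1) / sd_n1          # frequency factor
    mean_n = statistics.mean(x)
    sd_n = statistics.stdev(x)
    return mean_n + k_m * sd_n               # PMP = mean + K_m * std
```

Because K_m is driven by the single largest observation, the estimate is sensitive to record length, which is why the study also quantifies the stable data size needed for reliable results.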
Maximum entropy production rate in quantum thermodynamics
Energy Technology Data Exchange (ETDEWEB)
Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)
2010-06-01
In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible
Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier
2010-05-01
PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
Multi-stable perception balances stability and sensitivity
Directory of Open Access Journals (Sweden)
Alexander ePastukhov
2013-03-01
Full Text Available We report that multi-stable perception operates in a consistent, dynamical regime, balancing the conflicting goals of stability and sensitivity. When a multi-stable visual display is viewed continuously, its phenomenal appearance reverses spontaneously at irregular intervals. We characterized the perceptual dynamics of individual observers in terms of four statistical measures: the distribution of dominance times (mean and variance) and the novel, subtle dependence on prior history (correlation and time-constant). The dynamics of multi-stable perception is known to reflect several stabilizing and destabilizing factors. Phenomenologically, its main aspects are captured by a simplistic computational model with competition, adaptation, and noise. We identified small parameter volumes (~3% of the possible volume) in which the model reproduced both dominance distribution and history-dependence of each observer. For 21 of 24 data sets, the identified volumes clustered tightly (~15% of the possible volume), revealing a consistent 'operating regime' of multi-stable perception. The 'operating regime' turned out to be marginally stable or, equivalently, near the brink of an oscillatory instability. The chance probability of the observed clustering was <0.02. To understand the functional significance of this empirical 'operating regime', we compared it to the theoretical 'sweet spot' of the model. We computed this 'sweet spot' as the intersection of the parameter volumes in which the model produced stable perceptual outcomes and in which it was sensitive to input modulations. Remarkably, the empirical 'operating regime' proved to be largely coextensive with the theoretical 'sweet spot'. This demonstrated that perceptual dynamics was not merely consistent but also functionally optimized (in that it balances stability with sensitivity). Our results imply that multi-stable perception is not a laboratory curiosity, but reflects a functional optimization of perceptual
Determination of the maximum-depth to potential field sources by a maximum structural index method
Fedi, M.; Florio, G.
2013-01-01
A simple and fast determination of the limiting depth to the sources may represent a significant help to the data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources we work out a simple and fast strategy to obtain the maximum depth by using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (N_max). N_max may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)_max / f_max ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.
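The classical ratio criterion the abstract starts from can be sketched numerically. Assuming a Bott-Smith-type bound of the form z_max ≈ K · f_max / |∂f/∂x|_max, with K a model-dependent constant (K ≈ 0.65 is the value for a 2D line-mass source and is used here purely for illustration, not taken from the paper):

```python
def bott_smith_max_depth(xs, f, K=0.65):
    """Depth bound z_max <= K * f_max / |df/dx|_max (Bott-Smith-type rule).

    K is a model-dependent constant; 0.65 corresponds to a 2D line-mass
    (horizontal cylinder) source and is assumed here for illustration only.
    """
    f_max = max(f)
    # central finite differences for the horizontal gradient of the field
    grad = [(f[i + 1] - f[i - 1]) / (xs[i + 1] - xs[i - 1])
            for i in range(1, len(f) - 1)]
    g_max = max(abs(g) for g in grad)
    return K * f_max / g_max

# Synthetic gravity profile of a buried horizontal cylinder at depth z0:
# g(x) = c * z0 / (x^2 + z0^2), homogeneous of degree -1 (structural index N = 1)
z0, c = 4.0, 100.0
xs = [x * 0.05 for x in range(-400, 401)]
f = [c * z0 / (x * x + z0 * z0) for x in xs]

print(round(bott_smith_max_depth(xs, f), 2))  # close to the true depth 4.0
```

For this source type the bound is nearly attained, so the estimate lands close to the true depth; for other source geometries it remains an upper bound.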
Unit of stable isotopic N15 analysis
International Nuclear Information System (INIS)
Cabrera de Bisbal, Evelin; Paredes U, Maria
1997-01-01
The continuous and growing demand for crops and livestock by the domestic population forces the search for technical solutions in agriculture. One solution attainable in the near future is to scale up agricultural production on land already under cultivation, either by intensifying cropping and/or by increasing unit yields. In intensive cropping systems, the crops extract substantial quantities of nutrients, which are replaced through the application of fertilizers. The scarcity of resources and the rising prices of commercial inputs have made it necessary to analyze and improve low-input cropping systems and to use resources effectively. All of this has led to a system-focused concept of plant nutrition, which integrates the sources of plant nutrients and the crop production factors within a productive cropping system, in order to improve soil fertility, agricultural productivity and profitability. This system seeks both greater efficiency of chemical fertilizers and maximum benefit from alternative nutrient sources, such as organic fertilizers, phosphate rock and biological nitrogen fixation. Field experiments under different environmental conditions (soils and climate) allow the best combination of fertilizer practices (dose, placement, timing and source) to be determined for selected cropping systems. Experimentation with fertilizers labelled with stable and radioactive isotopes provides a direct and rapid method of obtaining conclusive answers to the questions of where, when and how they should be applied. Fertilizers labelled with 15N have been used to follow the fate of labelled fertilizer applied to crops and to determine the proportion of crop nutrients derived from the fertilizer. Isotopic techniques offer a fast and reliable means of obtaining information about the distribution of
LBA-ECO CD-02 Carbon, Nitrogen, Oxygen Stable Isotopes in Organic Material, Brazil
National Aeronautics and Space Administration — This data set reports the measurement of stable carbon, nitrogen, and oxygen isotope ratios in organic material (plant, litter and soil samples) in forest canopy...
BASIN TCP Stable Isotope Composition of CO2 in Terrestrial Ecosystems
National Aeronautics and Space Administration — This data set reports stable isotope ratio data of CO2 (13C/12C and 18O/16O) associated with photosynthetic and respiratory exchanges across the biosphere-atmosphere...
Alaska Northern Fur Seal Foraging Habitat Model Stable Isotope Data, 2006-2008
National Oceanic and Atmospheric Administration, Department of Commerce — These data sets were used by Zeppelin et al. (2015) to model northern fur seal foraging habitats based on stable isotope values measured in plasma and red blood...
Moltex Energy's stable salt reactors
International Nuclear Information System (INIS)
O'Sullivan, R.; Laurie, J.
2016-01-01
A stable salt reactor is a molten salt reactor in which the molten fuel salt is contained in fuel rods. The concept was invented in 1951 and was rediscovered and improved recently by the Moltex Energy Company. The main advantage of using molten salt fuel is that the two problematic fission products, cesium and iodine, do not exist in gaseous form but rather as salts that present no danger in case of accident. Another advantage is the strongly negative temperature coefficient of reactivity, which means the reactor self-regulates. Feasibility studies have been performed on a molten salt fuel composed of sodium chloride and plutonium/uranium/lanthanide/actinide trichloride. The coolant is a mix of sodium and zirconium fluoride salts that requires only low flow rates. The addition of 1 mol% of zirconium metal to the coolant reduces the risk of corrosion with standard steels, and the addition of 2% of hafnium reduces the neutron dose. The temperature of the coolant is expected to reach 650 degrees Celsius at the exit of the core. The reactor is designed to be modular and will be able to burn actinides. (A.C.)
Stable piecewise polynomial vector fields
Directory of Open Access Journals (Sweden)
Claudio Pessoa
2012-09-01
Full Text Available Let $N=\{y>0\}$ and $S=\{y<0\}$ be the semi-planes of $\mathbb{R}^2$ having as common boundary the line $D=\{y=0\}$. Let $X$ and $Y$ be polynomial vector fields defined in $N$ and $S$, respectively, leading to a discontinuous piecewise polynomial vector field $Z=(X,Y)$. This work pursues the stability and the transition analysis of solutions of $Z$ between $N$ and $S$, started by Filippov (1988) and Kozlova (1984) and reformulated by Sotomayor-Teixeira (1995) in terms of the regularization method. This method consists in analyzing a one-parameter family of continuous vector fields $Z_{\epsilon}$, defined by averaging $X$ and $Y$. This family approaches $Z$ when the parameter goes to zero. The results of Sotomayor-Teixeira and Sotomayor-Machado (2002), providing conditions on $(X,Y)$ for the regularized vector fields to be structurally stable on planar compact connected regions, are extended to discontinuous piecewise polynomial vector fields on $\mathbb{R}^2$. Pertinent genericity results for vector fields satisfying the above stability conditions are also extended to the present case. A procedure for the study of discontinuous piecewise vector fields at infinity through a compactification is proposed here.
Stable Structures for Distributed Applications
Directory of Open Access Journals (Sweden)
Eugen DUMITRASCU
2008-01-01
Full Text Available For distributed applications, we define the linear, tree and graph structure types, with their different variants and the ways of aggregating them. Distributed applications have assigned structures whose characteristics influence the costs of the development-cycle stages and the exploitation costs transferred to each user. We also present the quality characteristics of a structure for a stable application, focusing on the stability characteristic, and define measurable indicators for estimating its level. The factors influencing stability and the ways of increasing it are identified, so that the costs of the development stages, of usage and of maintenance are kept within limits that ensure the global efficiency of the application. The basic aspects of distributed applications are presented: definition, peculiarities and importance. The development cycle of a distributed application is detailed. We also describe the mechanisms for building the defined structures and analyze their complexity for a distributed application of a virtual store.
Castrillon, Julio; Genton, Marc G.; Yokota, Rio
2015-01-01
We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic
Stable solutions of nonlocal electron heat transport equations
International Nuclear Information System (INIS)
Prasad, M.K.; Kershaw, D.S.
1991-01-01
Electron heat transport equations with a nonlocal heat flux are in general ill-posed and intrinsically unstable, as proved by the present authors [Phys. Fluids B 1, 2430 (1989)]. A straightforward numerical solution of these equations will therefore lead to absurd results. It is shown here that by imposing a minimal set of constraints on the problem it is possible to arrive at a globally stable, consistent, and energy conserving numerical solution
Direct search for pair production of heavy stable charged particles in Z decays
International Nuclear Information System (INIS)
Soderstrom, E.; McKenna, J.A.; Abrams, G.S.; Adolphsen, C.E.; Averill, D.; Ballam, J.; Barish, B.C.; Barklow, T.; Barnett, B.A.; Bartelt, J.; Bethke, S.; Blockus, D.; Bonvicini, G.; Boyarski, A.; Brabson, B.; Breakstone, A.; Bulos, F.; Burchat, P.R.; Burke, D.L.; Cence, R.J.; Chapman, J.; Chmeissani, M.; Cords, D.; Coupal, D.P.; Dauncey, P.; DeStaebler, H.C.; Dorfan, D.E.; Dorfan, J.M.; Drewer, D.C.; Elia, R.; Feldman, G.J.; Fernandes, D.; Field, R.C.; Ford, W.T.; Fordham, C.; Frey, R.; Fujino, D.; Gan, K.K.; Gero, E.; Gidal, G.; Glanzman, T.; Goldhaber, G.; Gomez Cadenas, J.J.; Gratta, G.; Grindhammer, G.; Grosse-Wiesmann, P.; Hanson, G.; Harr, R.; Harral, B.; Harris, F.A.; Hawkes, C.M.; Hayes, K.; Hearty, C.; Heusch, C.A.; Hildreth, M.D.; Himel, T.; Hinshaw, D.A.; Hong, S.J.; Hutchinson, D.; Hylen, J.; Innes, W.R.; Jacobsen, R.G.; Jaros, J.A.; Jung, C.K.; Kadyk, J.A.; Kent, J.; King, M.; Koetke, D.S.; Komamiya, S.; Koska, W.; Kowalski, L.A.; Kozanecki, W.; Kral, J.F.; Kuhlen, M.; Labarga, L.; Lankford, A.J.; Larsen, R.R.; Le Diberder, F.; Levi, M.E.; Litke, A.M.; Lou, X.C.; Lueth, V.; Matthews, J.A.J.; Mattison, T.; Milliken, B.D.; Moffeit, K.C.; Munger, C.T.; Murray, W.N.; Nash, J.; Ogren, H.; O'Shaughnessy, K.F.; Parker, S.I.; Peck, C.; Perl, M.L.; Petradza, M.; Pitthan, R.; Porter, F.C.; Rankin, P.; Riles, K.; Rouse, F.R.; Rust, D.R.; Sadrozinski, H.F.W.; Schaad, M.W.; Schumm, B.A.; Seiden, A.; Smith, J.G.; Snyder, A.; Stoker, D.P.; Stroynowski, R.; Swartz, M.; Thun, R.; Trilling, G.H.; Van Kooten, R.; Voruganti, P.; Wagner, S.R.; Watson, S.; Weber, P.; Weinstein, A.J.; Weir, A.J.; Wicklund, E.; Woods, M.; Wu, D.Y.; Yurko, M.; Zaccardelli, C.; von Zanthie, C.
1990-01-01
A search for pair production of stable charged particles from Z decay has been performed with the Mark II detector at the SLAC Linear Collider. Particle masses are determined from momentum, ionization energy loss, and time-of-flight measurements. A limit excluding pair production of stable fourth-generation charged leptons and stable mirror fermions with masses between the muon mass and 36.3 GeV/c² is set at the 95% confidence level. Pair production of stable supersymmetric scalar leptons with masses between the muon mass and 32.6 GeV/c² is also excluded.
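The mass determination described here combines momentum and velocity measurements: with the velocity β obtained from the time of flight over a known path, the mass follows from m = p·sqrt(1/β² - 1). A minimal sketch of this kinematic relation, with an invented flight path rather than actual Mark II detector geometry:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def mass_from_p_and_tof(p_gev, path_m, tof_s):
    """Particle mass (GeV/c^2) from momentum and time of flight:
    m = p * sqrt(1/beta^2 - 1), with beta = L / (c * t)."""
    beta = path_m / (C * tof_s)
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

# A 30 GeV/c particle of mass 36.3 GeV/c^2 over an assumed 2 m flight path:
m_true, p = 36.3, 30.0
e = math.sqrt(p * p + m_true * m_true)   # energy in GeV
beta_true = p / e
t = 2.0 / (C * beta_true)                # expected time of flight in seconds
print(round(mass_from_p_and_tof(p, 2.0, t), 1))  # recovers 36.3
```

Heavier particles of a given momentum are slower, so a heavy stable particle shows up as an anomalously long time of flight (and, complementarily, anomalous ionization energy loss).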
Accurate modeling and maximum power point detection of ...
African Journals Online (AJOL)
Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.
Maximum power per VA control of vector controlled interior ...
Indian Academy of Sciences (India)
Thakur Sumeet Singh
2018-04-11
Apr 11, 2018 ... Department of Electrical Engineering, Indian Institute of Technology Delhi, New ... The MPVA operation allows maximum-utilization of the drive-system. ... Permanent magnet motor; unity power factor; maximum VA utilization; ...
Electron density distribution in Si and Ge using multipole, maximum ...
Indian Academy of Sciences (India)
Si and Ge has been studied using multipole, maximum entropy method (MEM) and ... and electron density distribution using the currently available versatile ..... data should be subjected to maximum possible utility for the characterization of.
The Emotional Climate of the Interpersonal Classroom in a Maximum Security Prison for Males.
Meussling, Vonne
1984-01-01
Examines the nature, the task, and the impact of teaching in a maximum security prison for males. Data are presented concerning the curriculum design used in order to create a nonevaluative atmosphere. Inmates' reactions to self-disclosure and open communication in a prison setting are evaluated. (CT)
Optimal control problems with delay, the maximum principle and necessary conditions
Frankena, J.F.
1975-01-01
In this paper we consider a rather general optimal control problem involving ordinary differential equations with delayed arguments and a set of equality and inequality restrictions on state- and control variables. For this problem a maximum principle is given in pointwise form, using variational
Advanced thermally stable jet fuels
Energy Technology Data Exchange (ETDEWEB)
Schobert, H.H.
1999-01-31
The Pennsylvania State University program in advanced thermally stable coal-based jet fuels has five broad objectives: (1) Development of mechanisms of degradation and solids formation; (2) Quantitative measurement of growth of sub-micrometer and micrometer-sized particles suspended in fuels during thermal stressing; (3) Characterization of carbonaceous deposits by various instrumental and microscopic methods; (4) Elucidation of the role of additives in retarding the formation of carbonaceous solids; (5) Assessment of the potential of production of high yields of cycloalkanes by direct liquefaction of coal. Future high-Mach aircraft will place severe thermal demands on jet fuels, requiring the development of novel, hybrid fuel mixtures capable of withstanding temperatures in the range of 400--500 C. In the new aircraft, jet fuel will serve as both an energy source and a heat sink for cooling the airframe, engine, and system components. The ultimate development of such advanced fuels requires a thorough understanding of the thermal decomposition behavior of jet fuels under supercritical conditions. Considering that jet fuels consist of hundreds of compounds, this task must begin with a study of the thermal degradation behavior of select model compounds under supercritical conditions. The research performed by The Pennsylvania State University was focused on five major tasks that reflect the objectives stated above: Task 1: Investigation of the Quantitative Degradation of Fuels; Task 2: Investigation of Incipient Deposition; Task 3: Characterization of Solid Gums, Sediments, and Carbonaceous Deposits; Task 4: Coal-Based Fuel Stabilization Studies; and Task 5: Exploratory Studies on the Direct Conversion of Coal to High Quality Jet Fuels. The major findings of each of these tasks are presented in this executive summary. A description of the sub-tasks performed under each of these tasks and the findings of those studies are provided in the remainder of this volume
LHC Report: Towards stable beams and collisions
CERN Bulletin
2011-01-01
Over the past two weeks, the LHC re-commissioning with beam has continued at a brisk pace. The first collisions of 2011 were produced on 2 March, with stable beams and collisions for physics planned for the coming days. Low intensity beams with just a few bunches of particles were used to test the energy ramp to 3.5 TeV and the squeeze. The results were successful and, as a by-product, the first collisions of 2011 were recorded on 2 March. One of the main activities carried out by the operation teams has been the careful set-up of the collimation system and of the injection and beam dump protection devices. The collimation system provides essential beam cleaning, preventing stray particles from impacting other elements of the machine, particularly the superconducting magnets. The injection and beam dump protection devices also perform a vital machine protection role, as they detect any beam that might be mis-directed during rare, but not totally unavoidable, hardware hiccups...
ROBUST MPC FOR STABLE LINEAR SYSTEMS
Directory of Open Access Journals (Sweden)
M.A. Rodrigues
2002-03-01
Full Text Available In this paper, a new model predictive controller (MPC), which is robust for a class of model uncertainties, is developed. Systems with stable dynamics and time-invariant model uncertainty are treated. The development herein proposed is focused on real industrial systems where the controller is part of an on-line optimization scheme and works in the output-tracking mode. In addition, the system has a time-varying number of degrees of freedom, since some of the manipulated inputs may become constrained. Moreover, the number of controlled outputs may also vary during system operation. Consequently, the actual system may show operating conditions with a number of controlled outputs larger than the number of available manipulated inputs. The proposed controller uses a state-space model, which is aimed at the representation of the output-predicted trajectory. Based on this model, a cost function is proposed whereby the output error is integrated along an infinite prediction horizon. The case of multiple operating points is considered, where the controller stabilizes a set of models corresponding to different operating conditions for the system. It is shown that closed-loop stability is guaranteed by the feasibility of a linear matrix optimization problem.
Canonical, stable, general mapping using context schemes.
Novak, Adam M; Rosen, Yohei; Haussler, David; Paten, Benedict
2015-11-15
Sequence mapping is the cornerstone of modern genomics. However, most existing sequence mapping algorithms are insufficiently general. We introduce context schemes: a method that allows the unambiguous recognition of a reference base in a query sequence by testing the query for substrings from an algorithmically defined set. Context schemes only map when there is a unique best mapping, and define this criterion uniformly for all reference bases. Mappings under context schemes can also be made stable, so that extension of the query string (e.g. by increasing read length) will not alter the mapping of previously mapped positions. Context schemes are general in several senses. They natively support the detection of arbitrary complex, novel rearrangements relative to the reference. They can scale over orders of magnitude in query sequence length. Finally, they are trivially extensible to more complex reference structures, such as graphs, that incorporate additional variation. We demonstrate empirically the existence of high-performance context schemes, and present efficient context scheme mapping algorithms. The software test framework created for this study is available from https://registry.hub.docker.com/u/adamnovak/sequence-graphs/. anovak@soe.ucsc.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
40 CFR 141.13 - Maximum contaminant levels for turbidity.
2010-07-01
... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...
Maximum Power Training and Plyometrics for Cross-Country Running.
Ebben, William P.
2001-01-01
Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…
13 CFR 107.840 - Maximum term of Financing.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...
7 CFR 3565.210 - Maximum interest rate.
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...
Novel maximum-margin training algorithms for supervised neural networks.
Ludwig, Oswaldo; Nunes, Urbano
2010-06-01
This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function, through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on Lp-norm is also proposed in order to take into account the idea of support vectors, however, overcoming the complexity involved in solving a constrained optimization problem, usually in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by the Fisher discriminant analysis. Such algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stop criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by
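The quantity the maximal-margin objective tries to enlarge is the geometric margin of the separating hyperplane: min_i y_i(w·x_i + b)/||w||. The sketch below computes this quantity for two hand-picked hyperplanes on toy data; it illustrates the MM principle itself, not the MMGDX training algorithm:

```python
import math

def geometric_margin(w, b, X, y):
    """Minimum geometric margin min_i y_i * (w . x_i + b) / ||w||,
    the quantity a maximum-margin objective tries to enlarge."""
    norm = math.sqrt(sum(wj * wj for wj in w))
    return min(yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) / norm
               for xi, yi in zip(X, y))

# Toy linearly separable data; labels in {-1, +1}.
X = [(0.0, 1.0), (1.0, 2.0), (3.0, 0.0), (4.0, 1.0)]
y = [+1, +1, -1, -1]

# Two hyperplanes that both separate the data; the second has a larger margin.
print(round(geometric_margin((-1.0, 0.0), 2.0, X, y), 3))  # 1.0
print(round(geometric_margin((-1.0, 1.0), 1.0, X, y), 3))  # 1.414
```

Both hyperplanes classify the data perfectly, but a maximum-margin objective prefers the second one because its worst-case distance to a training point is larger, which typically improves generalization.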
UpSet: Visualization of Intersecting Sets
Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter
2016-01-01
Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
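The basic quantities an UpSet matrix row displays are the sizes of the *exclusive* intersections: how many elements belong to exactly one combination of sets and no others. A minimal sketch of that computation on invented toy sets (not UpSet's own code, which is a web-based visualization):

```python
from itertools import chain

def exclusive_intersections(sets):
    """Size of each exclusive intersection: the number of elements belonging
    to exactly that combination of sets and no others -- the quantities
    shown as bars against the matrix layout in an UpSet plot."""
    names = list(sets)
    universe = set(chain.from_iterable(sets.values()))
    counts = {}
    for e in universe:
        key = tuple(sorted(n for n in names if e in sets[n]))
        counts[key] = counts.get(key, 0) + 1
    return counts

sets = {"A": {1, 2, 3, 4}, "B": {3, 4, 5}, "C": {4, 5, 6}}
result = exclusive_intersections(sets)
print(result[("A",)])           # 2: elements 1 and 2 occur only in A
print(result[("A", "B", "C")])  # 1: element 4 occurs in all three sets
```

With n sets there are up to 2^n - 1 such combinations, which is the combinatorial explosion the paper's aggregation and query mechanisms are designed to tame.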
Development of a microprocessor-controlled coulometric system for stable ph control
Bergveld, Piet; van der Schoot, B.H.
1983-01-01
The coulometric pH control system utilizes a programmable coulostat for controlling the pH of a certain volume of unbuffered solution. Based on theoretical considerations, conditions are established which guarantee stable operation with maximum suppression of disturbances from the dissolution of
Doppler wind lidar using a MOPA semiconductor laser at stable single-frequency operation
DEFF Research Database (Denmark)
Rodrigo, Peter John; Pedersen, Christian
2009-01-01
for the tapered amplifier section. The specified maximum current values are 0.7 A and 4.0 A for Idfb and Iamp. Although the MOPA-SL has been proven capable of producing single-frequency CW output beam, stable operation at this spectral condition has also been known to highly depend on the drive currents...
Population Games, Stable Games, and Passivity
Directory of Open Access Journals (Sweden)
Michael J. Fox
2013-10-01
Full Text Available The class of “stable games”, introduced by Hofbauer and Sandholm in 2009, has the attractive property of admitting global convergence to equilibria under many evolutionary dynamics. We show that stable games can be identified as a special case of the feedback-system-theoretic notion of a “passive” dynamical system. Motivated by this observation, we develop a notion of passivity for evolutionary dynamics that complements the definition of the class of stable games. Since interconnections of passive dynamical systems exhibit stable behavior, we can make conclusions about passive evolutionary dynamics coupled with stable games. We show how established evolutionary dynamics qualify as passive dynamical systems. Moreover, we exploit the flexibility of the definition of passive dynamical systems to analyze generalizations of stable games and evolutionary dynamics that include forecasting heuristics as well as certain games with memory.
Stable isotope tracers and exercise physiology: past, present and future.
Wilkinson, Daniel J; Brook, Matthew S; Smith, Kenneth; Atherton, Philip J
2017-05-01
Stable isotope tracers have been invaluable assets in physiological research for over 80 years. The application of substrate-specific stable isotope tracers has permitted exquisite insight into amino acid, fatty-acid and carbohydrate metabolic regulation (i.e. incorporation, flux, and oxidation, in a tissue-specific and whole-body fashion) in health, disease and response to acute and chronic exercise. Yet, despite many breakthroughs, there are limitations to 'substrate-specific' stable isotope tracers, which limit physiological insight, e.g. the need for intravenous infusions and restriction to short-term studies (hours) in controlled laboratory settings. In recent years significant interest has developed in alternative stable isotope tracer techniques that overcome these limitations, in particular deuterium oxide (D2O or heavy water). The unique properties of this tracer mean that through oral administration, the turnover and flux through a number of different substrates (muscle proteins, lipids, glucose, DNA (satellite cells)) can be monitored simultaneously and flexibly (hours/weeks/months) without the need for restrictive experimental control. This makes it uniquely suited for the study of 'real world' human exercise physiology (amongst many other applications). Moreover, using D2O permits evaluation of turnover of plasma and muscle proteins (e.g. dynamic proteomics) in addition to metabolomics (e.g. fluxomics) to seek molecular underpinnings, e.g. of exercise adaptation. Here, we provide insight into the role of stable isotope tracers, from substrate-specific to novel D2O approaches, in facilitating our understanding of metabolism. Further novel potential applications of stable isotope tracers are also discussed in the context of integration with the snowballing field of 'omic' technologies. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Li, Sui-Xian
2018-05-07
Previous research has shown the effectiveness of selecting filter sets from among a large set of commercial broadband filters by a vector analysis method based on maximum linear independence (MLI). However, the traditional MLI approach is suboptimal due to the need to predefine the first filter of the selected filter set as the one with the maximum ℓ₂ norm among all available filters. An exhaustive imaging simulation with every single filter serving as the first filter is conducted to investigate the features of the most competent filter set. From the simulation, the characteristics of the most competent filter set are discovered. Besides minimization of the condition number, the geometric features of the best-performing filter set comprise a distinct transmittance peak along the wavelength axis of the first filter, a generally uniform distribution for the peaks of the filters and substantial overlaps of the transmittance curves of the adjacent filters. Therefore, the best-performing filter sets can be recognized intuitively by simple vector analysis and just a few experimental verifications. A practical two-step framework for selecting an optimal filter set is recommended, which guarantees a significant enhancement of the performance of the systems. This work should be useful for optimizing the spectral sensitivity of broadband multispectral imaging sensors.
An improved maximum power point tracking method for a photovoltaic system
Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes
2016-06-01
In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for a photovoltaic (PV) system is proposed. To achieve both a fast dynamic response and stable steady-state power, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. A second algorithm was then proposed to address the wrong decisions that may be made at an abrupt change in irradiance. The proposed auto-scaling variable step-size approach was compared with various other approaches from the literature, such as the classical fixed step-size, variable step-size and a recent auto-scaling variable step-size MPPT approach. Simulation results obtained with MATLAB/SIMULINK are given and discussed for validation.
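The auto-scaling idea described above can be sketched in a few lines: a perturb-and-observe loop whose duty-cycle step grows with the observed power slope |ΔP/ΔD| far from the peak and shrinks near it. This is an illustrative reconstruction under assumed names (`pv_power`, `n_scale`), not the authors' algorithm:

```python
# Illustrative sketch only: a perturb-and-observe MPPT loop whose duty-cycle
# step scales with the observed |dP/dD| slope. The toy power curve and the
# scaling constant are hypothetical stand-ins, not the authors' model.

def pv_power(duty):
    """Toy concave power-vs-duty-cycle curve with its maximum at duty = 0.6."""
    return max(0.0, 100.0 - 400.0 * (duty - 0.6) ** 2)

def mppt_track(duty=0.2, n_scale=0.0005, max_step=0.05, steps=200):
    prev_power = pv_power(duty)
    prev_duty = duty
    duty += 0.01                                        # initial perturbation
    for _ in range(steps):
        power = pv_power(duty)
        d_p, d_d = power - prev_power, duty - prev_duty
        if d_d == 0.0:
            d_d = 1e-9
        step = min(n_scale * abs(d_p / d_d), max_step)  # auto-scaling step size
        direction = 1.0 if d_p * d_d > 0 else -1.0      # P&O decision rule
        prev_power, prev_duty = power, duty
        duty = min(max(duty + direction * step, 0.0), 1.0)
    return duty
```

Far from the maximum the steep slope yields large (clamped) steps, giving a fast dynamic response; near the maximum the slope, and hence the step, vanishes, giving stable steady-state power.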
2010-07-01
40 CFR Protection of Environment (2010-07-01): Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes... For cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would...
Maximum Historical Seismic Intensity Map of S. Miguel Island (azores)
Silveira, D.; Gaspar, J. L.; Ferreira, T.; Queiroz, G.
The Azores archipelago is situated in the Atlantic Ocean where the American, African and Eurasian lithospheric plates meet. Its geological setting is dominated by the so-called Azores Triple Junction, located in the area where the Terceira Rift, a NW-SE to WNW-ESE fault system with a dextral component, intersects the Mid-Atlantic Ridge, which has an approximately N-S direction. S. Miguel Island is located in the eastern segment of the Terceira Rift, showing a high diversity of volcanic and tectonic structures. It is the largest Azorean island and includes three active trachytic central volcanoes with calderas (Sete Cidades, Fogo and Furnas) placed at the intersection of the NW-SE Terceira Rift regional faults with an E-W deep fault system thought to be a relic of a Mid-Atlantic Ridge transform fault. N-S and NE-SW faults also occur in this context. Basaltic cinder cones emplaced along NW-SE fractures link those major volcanic structures. The easternmost part of the island comprises an inactive trachytic central volcano (Povoação) and an old basaltic volcanic complex (Nordeste). Since the settlement of the island, early in the XV century, several destructive earthquakes have occurred in the Azores region. At least 11 events hit S. Miguel Island with high intensity, some of which caused several deaths and significant damage. The analysis of historical documents allowed the history and impact of all those earthquakes to be reconstructed, and new intensity maps using the 1998 European Macroseismic Scale were produced for each event. The data were then integrated in order to obtain the maximum historical seismic intensity map of S. Miguel. This map is regarded as an important tool for hazard assessment and risk mitigation, as it indicates the location of dangerous seismogenic zones and provides a comprehensive set of data to be applied in land-use planning, emergency planning and building construction.
Sparse and stable Markowitz portfolios.
Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace
2009-07-28
We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only few active positions), and allows accounting for transaction costs. Our approach recovers the no-short-positions portfolio as a special case, but also allows for a limited number of short positions. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio.
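The ℓ1-penalized least-squares reformulation can be sketched with plain proximal-gradient (ISTA) iterations; this toy version omits the budget (sum-to-one) and target-return constraints of the actual Markowitz formulation and uses synthetic return data:

```python
# Sketch of the l1-penalized least-squares idea behind sparse portfolios,
# solved with plain ISTA (proximal gradient). The budget constraint of the
# full Markowitz formulation is omitted for brevity; data are synthetic.
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_weights(R, y, tau, iters=500):
    """Minimize 0.5*||R w - y||^2 + tau*||w||_1 by ISTA."""
    w = np.zeros(R.shape[1])
    step = 1.0 / np.linalg.norm(R, 2) ** 2      # 1 / Lipschitz constant
    for _ in range(iters):
        grad = R.T @ (R @ w - y)                # gradient of the smooth part
        w = soft_threshold(w - step * grad, step * tau)
    return w

rng = np.random.default_rng(0)
R = rng.normal(size=(200, 10))                  # 200 periods, 10 assets
y = np.full(200, 0.05)                          # target return series
w = sparse_weights(R, y, tau=1.0)
```

Larger `tau` drives more weights exactly to zero (sparser portfolios, i.e., fewer active positions); `tau = 0` recovers ordinary least squares.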
Approximate maximum likelihood estimation for population genetic inference.
Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas
2017-11-27
In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, methods such as Approximate Bayesian Computation (ABC) can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable effort into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
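The "moving along a simulated gradient" strategy can be illustrated with a toy Kiefer-Wolfowitz scheme that maximizes a quadratic "log-likelihood" observable only through noisy evaluations; the gain sequences here are generic textbook choices, not the tuning guidelines the paper develops:

```python
# Toy Kiefer-Wolfowitz stochastic approximation: ascend a "log-likelihood"
# f(theta) = -(theta - 2)^2 that can only be evaluated with noise, as a
# stand-in for likelihoods approximated by simulation. Gain sequences are
# generic textbook choices, not the authors' tuned procedure.
import random

def noisy_loglik(theta, rng):
    return -(theta - 2.0) ** 2 + rng.gauss(0.0, 0.1)

def kiefer_wolfowitz(theta=0.0, steps=5000, seed=1):
    rng = random.Random(seed)
    for k in range(1, steps + 1):
        a = 1.0 / k                # decaying step-size (gain) sequence
        c = 1.0 / k ** (1 / 3)     # decaying finite-difference half-width
        grad = (noisy_loglik(theta + c, rng)
                - noisy_loglik(theta - c, rng)) / (2 * c)
        theta += a * grad          # move along the simulated gradient
    return theta
```

The iterates settle near the maximizer theta = 2 despite never observing an exact gradient; the decaying sequences a_k and c_k play the role of the tuning choices the abstract stresses.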
Maximum likelihood pedigree reconstruction using integer linear programming.
Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A
2013-01-01
Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible. © 2012 Wiley Periodicals, Inc.
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
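For context on the objective, the parsimony score of a single binary character on a fixed tree can be computed with the classic Fitch pass; this is only the scoring step that defines "most parsimonious", not the ILP search over trees described in the abstract:

```python
# Classic Fitch small-parsimony pass for one binary character on a rooted
# binary tree given as nested tuples of leaf names. This is the scoring
# step only -- the paper's ILPs search over tree topologies.

def fitch_score(tree, leaf_states):
    """Return (candidate state set, minimum number of substitutions)."""
    if isinstance(tree, str):                 # leaf: known state, zero cost
        return {leaf_states[tree]}, 0
    left, right = tree
    ls, lc = fitch_score(left, leaf_states)
    rs, rc = fitch_score(right, leaf_states)
    inter = ls & rs
    if inter:                                 # agreement: no extra change
        return inter, lc + rc
    return ls | rs, lc + rc + 1               # disagreement: one substitution

states = {"A": 0, "B": 0, "C": 1, "D": 1}
_, cost = fitch_score((("A", "B"), ("C", "D")), states)
print(cost)  # one substitution suffices on this tree
```

Grouping the 0-leaves together gives cost 1, while the alternative topology (("A", "C"), ("B", "D")) needs 2 changes, which is exactly the quantity a maximum-parsimony search minimizes over all trees.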
Fuzzy sets, rough sets, multisets and clustering
Dahlbom, Anders; Narukawa, Yasuo
2017-01-01
This book is dedicated to Prof. Sadaaki Miyamoto and presents cutting-edge papers in some of the areas in which he contributed. Bringing together contributions by leading researchers in the field, it concretely addresses clustering, multisets, rough sets and fuzzy sets, as well as their applications in areas such as decision-making. The book is divided into four parts, the first of which focuses on clustering and classification. The second part puts the spotlight on multisets, bags, fuzzy bags and other fuzzy extensions, while the third deals with rough sets. Rounding out the coverage, the last part explores fuzzy sets and decision-making.
The maximum entropy production and maximum Shannon information entropy in enzyme kinetics
Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš
2018-04-01
We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservation are considered as optimization constraints. The optimal steady-state enzyme rate constants computed in this way also yield the most uniform probability distribution of the enzyme states, which corresponds to the maximal Shannon information entropy. By means of stability analysis it is also demonstrated that maximal density of entropy production in this enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.
Tandberg-Hanssen, E.; Cheng, C. C.; Woodgate, B. E.; Brandt, J. C.; Chapman, R. D.; Athay, R. G.; Beckers, J. M.; Bruner, E. C.; Gurman, J. B.; Hyder, C. L.
1981-01-01
The Ultraviolet Spectrometer and Polarimeter on the Solar Maximum Mission spacecraft is described. It is pointed out that the instrument, which operates in the wavelength range 1150-3600 Å, has a spatial resolution of 2-3 arcsec and a spectral resolution of 0.02 Å FWHM in second order. A Gregorian telescope, with a focal length of 1.8 m, feeds a 1 m Ebert-Fastie spectrometer. A polarimeter comprising rotating MgF2 waveplates can be inserted behind the spectrometer entrance slit; it permits all four Stokes parameters to be determined. Among the observing modes are rasters, spectral scans, velocity measurements, and polarimetry. Examples of initial observations made since launch are presented.
Muscular outputs during dynamic bench press under stable versus unstable conditions.
Koshida, Sentaro; Urabe, Yukio; Miyashita, Koji; Iwai, Kanzunori; Kagimori, Aya
2008-09-01
Previous studies have suggested that resistance training exercise under unstable conditions decreases the isometric force output, yet little is known about its influence on muscular outputs during dynamic movement. The objective of this study was to investigate the effect of an unstable condition on power, force, and velocity outputs during the bench press. Twenty male collegiate athletes (mean age, 21.3 +/- 1.5 years; mean height, 167.7 +/- 7.7 cm; mean weight, 75.9 +/- 17.5 kg) participated in this study. Each subject attempted 3 sets of single bench presses with 50% of 1 repetition maximum (1RM) under a stable condition with a flat bench and an unstable condition with a Swiss ball. Acceleration data were obtained with an accelerometer attached to the center of a barbell shaft, and peak outputs of power, force, and velocity were computed. Although significant loss of the peak outputs was found under the unstable condition (p < 0.05), the reduction rates of the power, force, and velocity outputs were small compared with previous findings. Such small reduction rates of muscular outputs may not compromise the training effect. Prospective studies are necessary to confirm whether resistance training under an unstable condition permits the improvement of dynamic performance and trunk stability.
Bench Press Upper-Body Muscle Activation Between Stable and Unstable Loads.
Dunnick, Dustin D; Brown, Lee E; Coburn, Jared W; Lynn, Scott K; Barillas, Saldiam R
2015-12-01
The bench press is one of the most commonly used upper-body exercises in training and is performed with many different variations, including unstable loads (ULs). Although there is much research on use of an unstable surface, there is little to none on the use of an UL. The purpose of this study was to investigate muscle activation during the bench press while using a stable load (SL) vs. UL. Twenty resistance-trained men (age = 24.1 ± 2 years; ht = 177.5 ± 5.8 cm; mass = 88.7 ± 13.7 kg) completed 2 experimental conditions (SL and UL) at 2 different intensities (60 and 80% one repetition maximum). Unstable load was achieved by hanging 16 kg kettlebells by elastic bands from the end of the bar. All trial lifts were set to a 2-second cadence with a slight pause at the bottom. Subjects had electrodes attached to 5 muscles (pectoralis major, anterior deltoid, medial deltoid, triceps brachii, and latissimus dorsi) and performed 3 isometric bench press trials to normalize electromyographic data. All 5 muscles demonstrated significantly greater activation at 80% compared with 60% load and during concentric compared with eccentric actions. These results suggest that upper body muscle activation is not different in the bench press between UL and SL. Therefore, coaches should use their preference when designing training programs.
Applications of stable isotopes in clinical pharmacology
Schellekens, Reinout C A; Stellaard, Frans; Woerdenbag, Herman J; Frijlink, Henderik W; Kosterink, Jos G W
2011-01-01
This review aims to present an overview of the application of stable isotope technology in clinical pharmacology. Three main categories of stable isotope technology can be distinguished in clinical pharmacology. Firstly, it is applied in the assessment of drug pharmacology to determine the
Stable isotopes and biomarkers in microbial ecology
Boschker, H.T.S.; Middelburg, J.J.
2002-01-01
The use of biomarkers in combination with stable isotope analysis is a new approach in microbial ecology and a number of papers on a variety of subjects have appeared. We will first discuss the techniques for analysing stable isotopes in biomarkers, primarily gas chromatography-combustion-isotope
Modelling stable atmospheric boundary layers over snow
Sterk, H.A.M.
2015-01-01
Thesis entitled:
Modelling Stable Atmospheric Boundary Layers over Snow
H.A.M. Sterk
Wageningen, 29th of April, 2015
Summary
The emphasis of this thesis is on the understanding and forecasting of the Stable Boundary Layer (SBL) over snow-covered surfaces. SBLs
Gas phase thermal diffusion of stable isotopes
International Nuclear Information System (INIS)
Eck, C.F.
1979-01-01
The separation of stable isotopes at Mound Facility is reviewed from a historical perspective. The historical development of thermal diffusion from a laboratory process to a separation facility that handles all the noble gases is described. In addition, elementary thermal diffusion theory and elementary cascade theory are presented along with a brief review of the uses of stable isotopes
An algebraic method for constructing stable and consistent autoregressive filters
International Nuclear Information System (INIS)
Harlim, John; Hong, Hoon; Robbins, Jacob L.
2015-01-01
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
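The classical AR stability condition referred to in the abstract is easy to state and check: all roots of the characteristic polynomial must lie strictly inside the unit circle. A minimal check (standard material, not the paper's algebraic construction):

```python
# Classical stability check for an AR(p) model
#   x_t = a1*x_{t-1} + ... + ap*x_{t-p} + noise:
# stable iff all roots of z^p - a1*z^{p-1} - ... - ap lie inside the
# unit circle. Standard material, not the paper's algebraic construction.
import numpy as np

def is_stable_ar(coeffs):
    poly = np.concatenate(([1.0], -np.asarray(coeffs, dtype=float)))
    roots = np.roots(poly)
    return bool(np.all(np.abs(roots) < 1.0))

print(is_stable_ar([0.5]))        # AR(1), root at 0.5: stable
print(is_stable_ar([1.2]))        # AR(1), root at 1.2: unstable
print(is_stable_ar([0.5, 0.3]))   # AR(2), both roots inside unit circle
```

A parameter-construction method like the paper's must return coefficients for which such a check passes, in addition to meeting the Adams-Bashforth consistency constraints.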
Structure of acid-stable carmine.
Sugimoto, Naoki; Kawasaki, Yoko; Sato, Kyoko; Aoki, Hiromitsu; Ichi, Takahito; Koda, Takatoshi; Yamazaki, Takeshi; Maitani, Tamio
2002-02-01
Acid-stable carmine has recently been distributed in the U.S. market because of its good acid stability, but it is not permitted in Japan. We analyzed and determined the structure of the major pigment in acid-stable carmine, in order to establish an analytical method for it. Carminic acid was transformed into a different type of pigment, named acid-stable carmine, through amination when heated in ammonia solution. The features of the structure were clarified using a model compound, purpurin, in which the orientation of hydroxyl groups on the A ring of the anthraquinone skeleton is the same as that of carminic acid. By spectroscopic means and the synthesis of acid-stable carmine and purpurin derivatives, the structure of the major pigment in acid-stable carmine was established as 4-aminocarminic acid, a novel compound.
Energy Technology Data Exchange (ETDEWEB)
Garrigos, Ausias; Blanes, Jose M.; Carrasco, Jose A. [Area de Tecnologia Electronica, Universidad Miguel Hernandez de Elche, Avda. de la Universidad s/n, 03202 Elche, Alicante (Spain); Ejea, Juan B. [Departamento de Ingenieria Electronica, Universidad de Valencia, Avda. Dr Moliner 50, 46100 Valencia, Valencia (Spain)
2007-05-15
In this paper, an approximate curve fitting method for photovoltaic modules is presented. The operation is based on solving a simple solar cell electrical model by a microcontroller in real time. Only four voltage and current coordinates are needed to obtain the solar module parameters and set its operation at maximum power under any conditions of illumination and temperature. Despite its simplicity, this method is suitable for low-cost real-time applications, such as a control loop reference generator in photovoltaic maximum power point circuits. The theory supporting the estimator is presented, together with simulations and experimental results. (author)
International Nuclear Information System (INIS)
Hutchinson, Thomas H.; Boegi, Christian; Winter, Matthew J.; Owens, J. Willie
2009-01-01
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic organisms and the
Predecessor queries in dynamic integer sets
DEFF Research Database (Denmark)
Brodal, Gerth Stølting
1997-01-01
We consider the problem of maintaining a set of n integers in the range 0..2^w−1 under the operations of insertion, deletion, predecessor queries, minimum queries and maximum queries on a unit cost RAM with word size w bits. Let f (n) be an arbitrary nondecreasing smooth function satisfying n...
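A simple sorted-array baseline pins down the interface of the operations in the abstract (O(log n) queries but O(n) updates; the word-RAM structures studied in the paper are asymptotically faster):

```python
# Sorted-array baseline for the operations in the abstract: insert, delete,
# predecessor, minimum, maximum. Queries are O(log n), updates O(n) --
# far simpler and slower than the word-RAM structures the paper studies.
import bisect

class IntegerSet:
    def __init__(self):
        self._xs = []                      # sorted, duplicate-free

    def insert(self, x):
        i = bisect.bisect_left(self._xs, x)
        if i == len(self._xs) or self._xs[i] != x:
            self._xs.insert(i, x)

    def delete(self, x):
        i = bisect.bisect_left(self._xs, x)
        if i < len(self._xs) and self._xs[i] == x:
            self._xs.pop(i)

    def predecessor(self, x):
        """Largest element strictly smaller than x, or None."""
        i = bisect.bisect_left(self._xs, x)
        return self._xs[i - 1] if i > 0 else None

    def minimum(self):
        return self._xs[0] if self._xs else None

    def maximum(self):
        return self._xs[-1] if self._xs else None
```

The point of the paper is that on a unit-cost RAM with w-bit words, predecessor-style queries can be supported much faster than the comparison-based O(log n) this baseline achieves.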
Measuring conflict and power in strategic settings
Giovanni Rossi
2009-01-01
This is a quantitative approach to measuring conflict and power in strategic settings: noncooperative games (with cardinal or ordinal utilities) and blockings (without any preference specification). A (0, 1)-ranged index is provided, taking its minimum on common interest games, and its maximum on a newly introduced class termed “full conflict” games.
Microprocessor Controlled Maximum Power Point Tracker for Photovoltaic Application
International Nuclear Information System (INIS)
Jiya, J. D.; Tahirou, G.
2002-01-01
This paper presents a microprocessor controlled maximum power point tracker for a photovoltaic module. Input current and voltage are measured and multiplied within the microprocessor, which contains an algorithm to seek the maximum power point. The duty cycle of the DC-DC converter at which the maximum power occurs is obtained, noted and adjusted. The microprocessor constantly seeks to improve the obtained power by varying the duty cycle.
Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir
2011-01-01
Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven, to make it easier to use for both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353
High efficiency and stable white OLED using a single emitter
Energy Technology Data Exchange (ETDEWEB)
Li, Jian [Arizona State Univ., Tempe, AZ (United States). School of Mechanical, Aerospace, Chemical and Materials Engineering
2016-01-18
The ultimate objective of this project was to demonstrate an efficient and stable white OLED using a single emitter on a planar glass substrate. The focus of the project is on the development of efficient and stable square planar phosphorescent emitters and evaluation of such class of materials in the device settings. Key challenges included improving the emission efficiency of molecular dopants and excimers, controlling emission color of emitters and their excimers, and improving optical and electrical stability of emissive dopants. At the end of this research program, the PI has made enough progress to demonstrate the potential of excimer-based white OLED as a cost-effective solution for WOLED panel in the solid state lighting applications.
Overcoming Barriers in Unhealthy Settings
Directory of Open Access Journals (Sweden)
Michael K. Lemke
2016-03-01
Full Text Available We investigated the phenomenon of sustained health-supportive behaviors among long-haul commercial truck drivers, who belong to an occupational segment with extreme health disparities. With a focus on setting-level factors, this study sought to discover ways in which individuals exhibit resiliency while immersed in endemically obesogenic environments, as well as understand setting-level barriers to engaging in health-supportive behaviors. Using a transcendental phenomenological research design, 12 long-haul truck drivers who met screening criteria were selected using purposeful maximum sampling. Seven broad themes were identified: access to health resources, barriers to health behaviors, recommended alternative settings, constituents of health behavior, motivation for health behaviors, attitude toward health behaviors, and trucking culture. We suggest applying ecological theories of health behavior and settings approaches to improve driver health. We also propose the Integrative and Dynamic Healthy Commercial Driving (IDHCD) paradigm, grounded in complexity science, as a new theoretical framework for improving driver health outcomes.
International Nuclear Information System (INIS)
Worrell, R.B.
1985-05-01
The Set Equation Transformation System (SETS) is used to achieve the symbolic manipulation of Boolean equations. Symbolic manipulation involves changing equations from their original forms into more useful forms - particularly by applying Boolean identities. The SETS program is an interpreter which reads, interprets, and executes SETS user programs. The user writes a SETS user program specifying the processing to be achieved and submits it, along with the required data, for execution by SETS. Because of the general nature of SETS, i.e., the capability to manipulate Boolean equations regardless of their origin, the program has been used for many different kinds of analysis
Ballooning stable high beta tokamak equilibria
International Nuclear Information System (INIS)
Tuda, Takashi; Azumi, Masafumi; Kurita, Gen-ichi; Takizuka, Tomonori; Takeda, Tatsuoki
1981-04-01
The second stable regime of ballooning modes is numerically studied by using a two-dimensional tokamak transport code with a ballooning stability code. Using the simple FCT heating scheme, we find that the plasma can locally enter this second stable regime. We also obtained equilibria with fairly high beta (β ≈ 23%) that are stable against ballooning modes in the whole plasma region, by taking into account finite thermal diffusion due to unstable ballooning modes. These results show that a tokamak fusion reactor can operate in a high beta state, which is economically favourable. (author)
Development of stable isotope manufacturing in Russia
International Nuclear Information System (INIS)
Pokidychev, A.; Pokidycheva, M.
1999-01-01
For the past 25 years, Russia has relied heavily on the electromagnetic separation process for the production of middle and heavy mass stable isotopes. The separation of most light isotopes had been centered in Georgia which, after the collapse of the USSR, left Russia without this capability. In the mid-1970s, development of centrifuge technology for the separation of stable isotopes was begun. Alternative techniques such as laser separation, physical-chemical methods, and ion cyclotron resonance have also been investigated. Economic considerations have played a major role in the development and current status of the stable isotope enrichment capabilities of Russia
Directory of Open Access Journals (Sweden)
Ning-Cong Xiao
2013-12-01
Full Text Available In this paper, a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of uncertain parameters more accurately, as it does not need any additional information or assumptions. Finally, two optimization models are presented which can be used to determine the lower and upper bounds of the system's probability of failure under vague environmental conditions. Two numerical examples are investigated to demonstrate the proposed method.
International Nuclear Information System (INIS)
Jana, Debaldev; Agrawal, Rashmi; Upadhyay, Ranjit Kumar; Samanta, G.P.
2016-01-01
Highlights: • Age-selective harvesting of prey and predator is modeled by a multi-delayed prey-predator system. • The system passes from stable coexistence to an oscillatory mode, and vice versa, via Hopf bifurcation depending upon the parametric restrictions. • MSY, bionomic equilibrium and optimal harvesting policy also depend upon the age selection of prey and predator. • All the analytic results are delay dependent. • Numerical examples support the analytical findings. - Abstract: Ecological resource management and empirical studies are increasingly documenting the impact of selective harvesting on the evolutionarily stable strategy of both aquatic and terrestrial ecosystems. In the present study, the interactions between populations and their independent and combined selective harvesting are framed by a multi-delayed prey-predator system. Depending upon the age-selection strategy, the system passes from stable coexistence to an oscillatory mode, and vice versa, via Hopf bifurcation. The economic evolution of the system, mainly characterized by maximum sustainable yield (MSY), bionomic equilibrium and optimal harvesting, varies largely with the commensurate age selections of both populations because equilibrium population abundance becomes age-selection dependent. Our study indicates that a balance between harvesting delays and harvesting intensities should be maintained for better ecosystem management. Numerical examples support the analytical findings.
Multifield stochastic particle production: beyond a maximum entropy ansatz
Energy Technology Data Exchange (ETDEWEB)
Amin, Mustafa A.; Garcia, Marcos A.G.; Xie, Hong-Yi; Wen, Osmond, E-mail: mustafa.a.amin@gmail.com, E-mail: marcos.garcia@rice.edu, E-mail: hxie39@wisc.edu, E-mail: ow4@rice.edu [Physics and Astronomy Department, Rice University, 6100 Main Street, Houston, TX 77005 (United States)
2017-09-01
We explore non-adiabatic particle production for N_f coupled scalar fields in a time-dependent background with stochastically varying effective masses, cross-couplings and intervals between interactions. Under the assumption of weak scattering per interaction, we provide a framework for calculating the typical particle production rates after a large number of interactions. After setting up the framework, for analytic tractability, we consider interactions (effective masses and cross-couplings) characterized by series of Dirac-delta functions in time with amplitudes and locations drawn from different distributions. Without assuming that the fields are statistically equivalent, we present closed form results (up to quadratures) for the asymptotic particle production rates for the N_f=1 and N_f=2 cases. We also present results for the general N_f>2 case, but with more restrictive assumptions. We find agreement between our analytic results and direct numerical calculations of the total occupation number of the produced particles, with departures that can be explained in terms of violation of our assumptions. We elucidate the precise connection between the maximum entropy ansatz (MEA) used in Amin and Baumann (2015) and the underlying statistical distribution of the self and cross couplings. We provide and justify a simple-to-use (MEA-inspired) expression for the particle production rate, which agrees with our more detailed treatment when the parameters characterizing the effective mass and cross-couplings between fields are all comparable to each other. However, deviations are seen when some parameters differ significantly from others. We show that such deviations become negligible for a broad range of parameters when N_f >> 1.
A new record of the Paleocene Carbon Isotope Maximum from the Mississippi Embayment
Platt, B. F.; Gerweck, E. D.
2017-12-01
The Paleocene-Eocene interval is well known as a time of climatic transitions, especially hyperthermals associated with disturbances in the carbon cycle that are used as proxies for impacts of projected anthropogenic global climate change. A recent roadcut in Benton County, Mississippi exposes a disconformity between the Paleocene Naheola Formation and the Eocene Meridian Sand. The disconformity is developed on a thick, kaolinitic paleosol, which we interpret as a mature Oxisol that supported tropical rainforest vegetation (as evidenced by associated well preserved leaf fossils). The nature of the paleosol at the disconformity led us to hypothesize that the strata might contain evidence of the Paleocene Eocene Thermal Maximum (PETM). We sampled two Mississippi Mineral Resources Institute (MMRI) cores from the equivalent stratigraphic interval from Benton and Tippah Counties, Mississippi, for bulk organic carbon stable isotopes at 25-cm intervals. Results showed no evidence of the negative excursion characteristic of the PETM. Instead, we found a gradual upsection enrichment that we interpret as the positive trend characteristic of the lower Paleocene Carbon Isotope Maximum (PCIM). This is reasonable based on published biostratigraphy and absolute ages from elsewhere in the Naheola Formation. Further analyses will be performed to determine whether the PCIM trend continues throughout the remainder of the core. The identification of the PCIM in Mississippi Embayment (ME) sediments is important because stable carbon isotope data may be useful for improving chronostratigraphy in the ME. Also, the PCIM is associated with a gradual warming trend as indicated by previously published stable oxygen isotopes from benthic foraminifera. Studying successive ME paleosols throughout the PCIM may yield information about the impacts of gradual atmospheric warming on soils and associated terrestrial systems.
Maximum likelihood positioning algorithm for high-resolution PET scanners
International Nuclear Information System (INIS)
Gross-Weege, Nicolas; Schug, David; Hallen, Patrick; Schulz, Volkmar
2016-01-01
Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML
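The core of such an ML positioning scheme can be sketched in a few lines: score the measured light distribution against each crystal's expected distribution and pick the best hypothesis. The Gaussian channel model below is an illustrative assumption; the algorithm described in the abstract builds its PDFs from measured data instead.

```python
import numpy as np

def ml_position(light, pdfs):
    """Return the index of the crystal whose expected light distribution
    best explains the measurement. `pdfs` is a list of (mean, sigma)
    pairs per crystal hypothesis, one mean value per readout channel
    (a Gaussian sketch of the data-driven PDFs in the abstract)."""
    log_likelihoods = [
        -np.sum((light - mu) ** 2 / (2.0 * sigma ** 2))
        for mu, sigma in pdfs
    ]
    return int(np.argmax(log_likelihoods))
```

Unlike a center-of-gravity estimate, this comparison can also yield a likelihood value for the winning hypothesis, which is what enables the likelihood-based event filtering the authors evaluate.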
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195.406 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for...
78 FR 49370 - Inflation Adjustment of Maximum Forfeiture Penalties
2013-08-14
... ``civil monetary penalties provided by law'' at least once every four years. DATES: Effective September 13... increases the maximum civil monetary forfeiture penalties available to the Commission under its rules... maximum civil penalties established in that section to account for inflation since the last adjustment to...
22 CFR 201.67 - Maximum freight charges.
2010-04-01
..., commodity rate classification, quantity, vessel flag category (U.S.-or foreign-flag), choice of ports, and... the United States. (2) Maximum charter rates. (i) USAID will not finance ocean freight under any... owner(s). (4) Maximum liner rates. USAID will not finance ocean freight for a cargo liner shipment at a...
Maximum penetration level of distributed generation without violating voltage limits
Morren, J.; Haan, de S.W.H.
2009-01-01
Connection of Distributed Generation (DG) units to a distribution network will result in a local voltage increase. As there will be a maximum on the allowable voltage increase, this will limit the maximum allowable penetration level of DG. By reactive power compensation (by the DG unit itself) a
Particle Swarm Optimization Based of the Maximum Photovoltaic ...
African Journals Online (AJOL)
Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power since the peak power point depends on the temperature and the irradiation level. A maximum peak power point tracking is then necessary for maximum efficiency. In this work, a Particle Swarm ...
Maximum-entropy clustering algorithm and its global convergence analysis
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
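The flavor of such a soft generalization of hard C-means can be sketched with entropy-regularized (Gibbs) assignments: the hard argmin assignment is replaced by the differentiable distribution p_ij ∝ exp(-β‖x_i − c_j‖²). This is an illustrative sketch in that spirit, not the paper's exact construction; the inverse temperature `beta` and the deterministic initialization are assumptions.

```python
import numpy as np

def _farthest_point_init(X, k):
    # deterministic greedy initialization: start at X[0], then repeatedly
    # add the point farthest from the centers chosen so far
    centers = [X[0]]
    for _ in range(k - 1):
        d2 = ((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1).min(1)
        centers.append(X[np.argmax(d2)])
    return np.array(centers)

def max_entropy_cluster(X, k, beta=5.0, iters=100):
    """Soft C-means with maximum-entropy (Gibbs) assignments."""
    centers = _farthest_point_init(X, k)
    for _ in range(iters):
        # squared distances, shape (n, k)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        # soft assignments; subtracting the row minimum avoids underflow
        p = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
        p /= p.sum(axis=1, keepdims=True)
        # weighted-mean center update
        centers = (p.T @ X) / p.sum(axis=0)[:, None]
    return centers, p
```

As beta → ∞ the assignments harden and the iteration reduces to ordinary C-means, which is the sense in which the entropy version is a "soft generalization".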
Application of maximum entropy to neutron tunneling spectroscopy
International Nuclear Information System (INIS)
Mukhopadhyay, R.; Silver, R.N.
1990-01-01
We demonstrate the maximum entropy method for the deconvolution of high resolution tunneling data acquired with a quasielastic spectrometer. Given a precise characterization of the instrument resolution function, a maximum entropy analysis of lutidine data obtained with the IRIS spectrometer at ISIS results in an effective factor of three improvement in resolution. 7 refs., 4 figs
The regulation of starch accumulation in Panicum maximum Jacq ...
African Journals Online (AJOL)
... decrease the starch level. These observations are discussed in relation to the photosynthetic characteristics of P. maximum. Keywords: accumulation; botany; carbon assimilation; co2 fixation; growth conditions; mesophyll; metabolites; nitrogen; nitrogen levels; nitrogen supply; panicum maximum; plant physiology; starch; ...
32 CFR 842.35 - Depreciation and maximum allowances.
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide” to...
The maximum significant wave height in the Southern North Sea
Bouws, E.; Tolman, H.L.; Holthuijsen, L.H.; Eldeberky, Y.; Booij, N.; Ferier, P.
1995-01-01
The maximum possible wave conditions along the Dutch coast, which seem to be dominated by the limited water depth, have been estimated in the present study with numerical simulations. Discussions with meteorologists suggest that the maximum possible sustained wind speed in North Sea conditions is
5 CFR 838.711 - Maximum former spouse survivor annuity.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the amount...
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Directory of Open Access Journals (Sweden)
Petr Stehlík
2015-01-01
Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time: u′_x (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ ℤ. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
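The dependence on the time step can be checked numerically: for the Nagumo equation with bistable f(u) = u(1 − u)(u − a), f vanishes at 0 and 1, so for small enough dt the weak maximum principle keeps solutions inside [0, 1]. The explicit scheme below on a periodic lattice is an illustrative sketch; the step sizes and a = 0.3 are assumed values, not taken from the paper.

```python
import numpy as np

def nagumo_step(u, k=1.0, dt=0.01, a=0.3):
    """One explicit time step of the lattice Nagumo equation
    Delta_t u_x = k (u_{x-1} - 2 u_x + u_{x+1}) + f(u_x)
    with bistable f(u) = u (1 - u) (u - a) and periodic boundary."""
    lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)
    f = u * (1.0 - u) * (u - a)
    return u + dt * (k * lap + f)
```

With k·dt ≤ 1/2 each update is a convex combination of a node and its neighbors plus a small reaction term, which is exactly the structure that the discrete weak maximum principle exploits; for larger time steps the invariance can fail.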
Stable Isotope Group 1982 progress report
International Nuclear Information System (INIS)
Stewart, M.K.
1983-06-01
The work of the Stable Isotope Group of the Institute of Nuclear Sciences during 1982, in the fields of isotope geology, isotope hydrology, geochronology, isotope biology and mass spectrometer instrumentation, is described
Bartolome Island, Galapagos Stable Oxygen Calibration Data
National Oceanic and Atmospheric Administration, Department of Commerce — Galapagos Coral Stable Oxygen Calibration Data. Sites: Bartolome Island: 0 deg, 17'S, 90 deg 33' W. Champion Island: 1 deg, 15'S, 90 deg, 05' W. Urvina Bay (Isabela...
Stable Isotope Group 1983 progress report
International Nuclear Information System (INIS)
Stewart, M.K.
1984-06-01
The work of the Stable Isotope Group of the Institute of Nuclear Sciences in the fields of isotope geology, isotope hydrology, geochronology, isotope biology and related fields, and mass spectrometer instrumentation, during 1983, is described
Directory of Open Access Journals (Sweden)
Bruno Barras
2010-01-01
Full Text Available This work is about formalizing models of various type theories of the Calculus of Constructions family. Here we focus on set-theoretical models. The long-term goal is to build a formal set-theoretical model of the Calculus of Inductive Constructions, so we can be sure that Coq is consistent with the language used by most mathematicians. One aspect of this work is to axiomatize several set theories: ZF, possibly with inaccessible cardinals, and HF, the theory of hereditarily finite sets. On top of these theories we have developed a piece of the usual set-theoretical construction of functions, ordinals and fixpoint theory. We then proved sound several models of the Calculus of Constructions, its extension with an infinite hierarchy of universes, and its extension with the inductive type of natural numbers where recursion follows the type-based termination approach. The other aspect is to try and discharge (most of) these assumptions. The goal here is rather to compare the theoretical strengths of all these formalisms. As already noticed by Werner, the replacement axiom of ZF in its general form seems to require a type-theoretical axiom of choice (TTAC).
Stable atomic hydrogen: Polarized atomic beam source
International Nuclear Information System (INIS)
Niinikoski, T.O.; Penttilae, S.; Rieubland, J.M.; Rijllart, A.
1984-01-01
We have carried out experiments with stable atomic hydrogen with a view to possible applications in polarized targets or polarized atomic beam sources. Recent results from the stabilization apparatus are described. The first stable atomic hydrogen beam source based on the microwave extraction method (which is being tested) is presented. The effect of the stabilized hydrogen gas density on the properties of the source is discussed. (orig.)
Morozov, Albert D; Dragunov, Timothy N; Malysheva, Olga V
1999-01-01
This book deals with the visualization and exploration of invariant sets (fractals, strange attractors, resonance structures, patterns etc.) for various kinds of nonlinear dynamical systems. The authors have created a special Windows 95 application called WInSet, which allows one to visualize the invariant sets. A WInSet installation disk is enclosed with the book. The book consists of two parts. Part I contains a description of WInSet and a list of the built-in invariant sets which can be plotted using the program. This part is intended for a wide audience with interests ranging from dynamical
On the Extension Complexity of Stable Set Polytopes for Perfect Graphs
H. Hu (Hao)
2015-01-01
In linear programming one can formulate many combinatorial optimization problems as optimizing a linear function over a feasible region that is a polytope. Given a polytope P, any non-redundant description of P contains precisely one inequality for each facet. A polytope
Multi-Objective Evaluation of Target Sets for Logistics Networks
National Research Council Canada - National Science Library
Emslie, Paul
2000-01-01
.... In the presence of many objectives--such as reducing maximum flow, lengthening routes, avoiding collateral damage, all at minimal risk to our pilots--the problem of determining the best target set is complex...
Does combined strength training and local vibration improve isometric maximum force? A pilot study.
Goebel, Ruben; Haddad, Monoem; Kleinöder, Heinz; Yue, Zengyuan; Heinen, Thomas; Mester, Joachim
2017-01-01
The aim of the study was to determine whether a combination of strength training (ST) and local vibration (LV) improved the isometric maximum force of arm flexor muscles. ST was applied to the left arm of the subjects; LV was applied to the right arm of the same subjects. The main aim was to examine the effect of LV during a dumbbell biceps curl (Scott curl) on isometric maximum force of the opposite muscle among the same subjects. It is hypothesized that the intervention with LV produces a greater gain in isometric force of the arm flexors than ST. Twenty-seven collegiate students participated in the study. The training load was 70% of the individual 1 RM. Four sets of 12 repetitions were performed three times per week for four weeks. The right arm of all subjects represented the vibration-trained body side (VS) and the left arm served as the traditionally trained body side (TTS). A significant increase of isometric maximum force occurred in both arms. VS, however, significantly increased isometric maximum force by about 43%, in contrast to 22% for the TTS. The combined intervention of ST and LV improves isometric maximum force of arm flexor muscles. III.
On the quirks of maximum parsimony and likelihood on phylogenetic networks.
Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles
2017-03-21
Maximum parsimony is one of the most frequently-discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogenetic networks, which can display such events, are attracting more and more interest in phylogenetic research. It is therefore necessary to extend concepts like maximum parsimony from phylogenetic trees to networks. Several suggestions for possible extensions can be found in recent literature, for instance the softwired and the hardwired parsimony concepts. In this paper, we analyze the so-called big parsimony problem under these two concepts, i.e. we investigate maximum parsimonious networks and analyze their properties. In particular, we show that finding a softwired maximum parsimony network is possible in polynomial time. We also show that the set of maximum parsimony networks for the hardwired definition always contains at least one phylogenetic tree. Lastly, we investigate some parallels of parsimony to different likelihood concepts on phylogenetic networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
A practical exact maximum compatibility algorithm for reconstruction of recent evolutionary history.
Cherry, Joshua L
2017-02-23
Maximum compatibility is a method of phylogenetic reconstruction that is seldom applied to molecular sequences. It may be ideal for certain applications, such as reconstructing phylogenies of closely-related bacteria on the basis of whole-genome sequencing. Here I present an algorithm that rapidly computes phylogenies according to a compatibility criterion. Although based on solutions to the maximum clique problem, this algorithm deals properly with ambiguities in the data. The algorithm is applied to bacterial data sets containing up to nearly 2000 genomes with several thousand variable nucleotide sites. Run times are several seconds or less. Computational experiments show that maximum compatibility is less sensitive than maximum parsimony to the inclusion of nucleotide data that, though derived from actual sequence reads, has been identified as likely to be misleading. Maximum compatibility is a useful tool for certain phylogenetic problems, such as inferring the relationships among closely-related bacteria from whole-genome sequence data. The algorithm presented here rapidly solves fairly large problems of this type, and provides robustness against misleading characters that can pollute large-scale sequencing data.
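For binary characters the compatibility criterion itself is simple (the classical four-gamete test), and the connection to maximum clique mentioned in the abstract can be illustrated with a greedy stand-in. This is a sketch only: the article's algorithm solves the clique problem exactly and handles ambiguous states, neither of which the toy version below attempts.

```python
def compatible(c1, c2):
    """Four-gamete test: two binary characters (state columns over the same
    taxa) are compatible iff at most three of the four state pairs occur."""
    return len(set(zip(c1, c2))) < 4

def greedy_compatible_set(chars):
    """Greedy stand-in for the maximum-compatibility problem, which is
    exactly a maximum clique problem on the pairwise-compatibility graph."""
    chosen = []
    for c in chars:
        if all(compatible(c, d) for d in chosen):
            chosen.append(c)
    return chosen
```

Two compatible characters can always be explained on a common tree without extra state changes, so a maximum mutually-compatible set corresponds to a maximum clique in the graph whose vertices are characters and whose edges are compatible pairs.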
A long-term stable power supply μDMFC stack for wireless sensor node applications
International Nuclear Information System (INIS)
Wu, Z L; Wang, X H; Teng, F; Li, X Z; Wu, X M; Liu, L T
2013-01-01
A passive, air-breathing 4-cell micro direct methanol fuel cell (μDMFC) stack is presented, featuring a fuel-delivery structure for a long-term, stable power supply. The fuel is stored in a T-shaped tank and diffuses through the porous diffusion layer to the catalyst at the anode. The stack has a maximum power output of 110 mW with 3 M methanol at room temperature and delivers stable power even when only 5% of the fuel remains in the reservoir. Its performance decreases by less than 3% over 100 hours of continuous work. As such, it is believed to be well suited for powering wireless sensor nodes.
2013-02-12
... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...
U.S. Department of Health & Human Services — The VSAC provides downloadable access to all official versions of vocabulary value sets contained in the 2014 Clinical Quality Measures (CQMs). Each value set...
Settings for Suicide Prevention
Directory of Open Access Journals (Sweden)
Lester L. Yuan
2007-06-01
Full Text Available This paper provides a brief introduction to the R package bio.infer, a set of scripts that facilitates the use of maximum likelihood (ML methods for predicting environmental conditions from assemblage composition. Environmental conditions can often be inferred from only biological data, and these inferences are useful when other sources of data are unavailable. ML prediction methods are statistically rigorous and applicable to a broader set of problems than more commonly used weighted averaging techniques. However, ML methods require a substantially greater investment of time to program algorithms and to perform computations. This package is designed to reduce the effort required to apply ML prediction methods.
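The underlying ML idea can be sketched in a few lines: score each candidate environmental value by the Bernoulli likelihood of the observed taxon presences under each taxon's occurrence-probability curve, and return the best-scoring value. The function below is an illustration of that principle only; the curves and grid in the usage example are hypothetical, and bio.infer's actual interface differs.

```python
import numpy as np

def infer_env(presence, curves, grid):
    """Maximum-likelihood inference of an environmental value from
    assemblage composition. `presence[i]` is 0/1 for taxon i, and
    `curves[i]` gives that taxon's probability of occurrence at each
    candidate environmental value in `grid`."""
    log_lik = np.zeros(len(grid))
    for y, p in zip(presence, curves):
        p = np.clip(p, 1e-9, 1.0 - 1e-9)  # guard the logarithms
        log_lik += y * np.log(p) + (1 - y) * np.log(1.0 - p)
    return grid[np.argmax(log_lik)]
```

Because absences contribute log(1 − p) terms, this uses more of the assemblage information than a weighted average of the present taxa's optima, which is one reason ML inference applies to a broader set of problems.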
International Nuclear Information System (INIS)
Hong, Chih-Ming; Ou, Ting-Chia; Lu, Kai-Hung
2013-01-01
A hybrid power control system is proposed in the paper, consisting of solar power, wind power, and a diesel engine. To achieve a fast and stable response for real power control, an intelligent controller is proposed, which consists of a Wilcoxon radial basis function network (RBFN) and an improved Elman neural network (ENN) for maximum power point tracking (MPPT). The pitch angle control of the wind power system uses the improved ENN controller, and the output is fed to the wind turbine to achieve MPPT. The solar array is integrated with an RBFN control algorithm to track the maximum power. MATLAB (MATrix LABoratory)/Simulink was used to build the dynamic model and simulate the solar and diesel-wind hybrid power system. - Highlights: ► To achieve a fast and stable response for the real power control. ► The pitch control of wind power uses an improved Elman neural network (ENN) controller to achieve maximum power point tracking (MPPT). ► The radial basis function network (RBFN) can quickly and accurately track the maximum power output of the PV (photovoltaic) array. ► MATLAB was used to build the dynamic model and simulate the hybrid power system. ► This method can reach the desired performance even under different load conditions
Mechanisms of stable lipid loss in a social insect
Ament, Seth A.; Chan, Queenie W.; Wheeler, Marsha M.; Nixon, Scott E.; Johnson, S. Peir; Rodriguez-Zas, Sandra L.; Foster, Leonard J.; Robinson, Gene E.
2011-01-01
SUMMARY Worker honey bees undergo a socially regulated, highly stable lipid loss as part of their behavioral maturation. We used large-scale transcriptomic and proteomic experiments, physiological experiments and RNA interference to explore the mechanistic basis for this lipid loss. Lipid loss was associated with thousands of gene expression changes in abdominal fat bodies. Many of these genes were also regulated in young bees by nutrition during an initial period of lipid gain. Surprisingly, in older bees, which is when maximum lipid loss occurs, diet played less of a role in regulating fat body gene expression for components of evolutionarily conserved nutrition-related endocrine systems involving insulin and juvenile hormone signaling. By contrast, fat body gene expression in older bees was regulated more strongly by evolutionarily novel regulatory factors, queen mandibular pheromone (a honey bee-specific social signal) and vitellogenin (a conserved yolk protein that has evolved novel, maturation-related functions in the bee), independent of nutrition. These results demonstrate that conserved molecular pathways can be manipulated to achieve stable lipid loss through evolutionarily novel regulatory processes. PMID:22031746
International Nuclear Information System (INIS)
Yadav, Anju; Rani, Mamta
2015-01-01
Alternate Julia sets have been studied in Picard iterative procedures. The purpose of this paper is to study the quadratic and cubic maps using superior iterates to obtain Julia sets with different alternate structures. Analytically, graphically and computationally it has been shown that alternate superior Julia sets can be connected, disconnected and totally disconnected, and also fattier than the corresponding alternate Julia sets. A few examples have been studied by applying different types of alternate structures.
Simulation model of ANN based maximum power point tracking controller for solar PV system
Energy Technology Data Exchange (ETDEWEB)
Rai, Anil K.; Singh, Bhupal [Department of Electrical and Electronics Engineering, Ajay Kumar Garg Engineering College, Ghaziabad 201009 (India); Kaushika, N.D.; Agarwal, Niti [School of Research and Development, Bharati Vidyapeeth College of Engineering, A-4 Paschim Vihar, New Delhi 110063 (India)
2011-02-15
In this paper the simulation model of an artificial neural network (ANN) based maximum power point tracking controller has been developed. The controller consists of an ANN tracker and an optimal control unit. The ANN tracker estimates the voltages and currents corresponding to the maximum power delivered by the solar PV (photovoltaic) array for variable cell temperature and solar radiation. The cell temperature is considered as a function of ambient air temperature, wind speed and solar radiation. The tracker is trained employing a set of 124 patterns using the back propagation algorithm. The mean square error of tracker output and target values is set to be of the order of 10^-5, and the successful convergence of the learning process takes 1281 epochs. The accuracy of the ANN tracker has been validated by employing different test data sets. The control unit uses the estimates of the ANN tracker to adjust the duty cycle of the chopper to the optimum value needed for maximum power transfer to the specified load. (author)
Parameters determining maximum wind velocity in a tropical cyclone
International Nuclear Information System (INIS)
Choudhury, A.M.
1984-09-01
The spiral structure of a tropical cyclone was earlier explained by a tangential velocity distribution which varies inversely as the distance from the cyclone centre outside the circle of maximum wind speed. The case has been extended in the present paper by adding a radial velocity. It has been found that a suitable combination of radial and tangential velocities can account for the spiral structure of a cyclone. This enables parametrization of the cyclone. Finally a formula has been derived relating maximum velocity in a tropical cyclone with angular momentum, radius of maximum wind speed and the spiral angle. The shapes of the spirals have been computed for various spiral angles. (author)
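To make the geometry concrete: if the radial and tangential velocity components keep a constant ratio tan(alpha), where alpha is the spiral (crossing) angle, the streamline equation dr/(r dθ) = tan(alpha) integrates to a logarithmic spiral. A minimal sketch under that assumption, with illustrative parameter values not taken from the paper:

```python
import math

# With tangential velocity v_t ∝ 1/r outside the radius of maximum wind and
# a radial component chosen so that v_r / v_t = tan(alpha) is constant, the
# streamlines satisfy dr / (r dtheta) = tan(alpha), i.e. the logarithmic spiral
#     r(theta) = r_m * exp(theta * tan(alpha)).
# The spiral angle and r_m below are illustrative, not values from the paper.

def spiral_radius(theta, r_m, alpha_deg):
    """Radius of the streamline at azimuth theta (radians), starting from
    the radius of maximum wind r_m at theta = 0."""
    return r_m * math.exp(theta * math.tan(math.radians(alpha_deg)))

# Radius after one full turn for a 15-degree spiral angle, r_m = 30 km:
print(spiral_radius(2 * math.pi, 30.0, 15.0))
```

Varying `alpha_deg` reproduces the family of spiral shapes computed in the paper for different spiral angles.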
Active Fault Tolerant Control of Livestock Stable Ventilation System
DEFF Research Database (Denmark)
Gholami, Mehdi
2011-01-01
Modern stables and greenhouses are equipped with different components for providing a comfortable climate for animals and plants. A component malfunction may result in loss of production. Therefore, it is desirable to design a control system which is stable and is able to provide an acceptable d… of fault. Designing a fault tolerant control scheme for the climate control system. In the first step, a conceptual multi-zone model for climate control of a livestock building is derived. The model is a nonlinear hybrid model. Hybrid systems contain both discrete and continuous components. The parameters… affine (PWA) components such as dead-zones, saturation, etc., or contain piecewise nonlinear models, which is the case for the climate control systems of the stables. The fault tolerant controller (FTC) is based on a switching scheme between a set of predefined passive fault tolerant controllers (PFTC)… are not included, while due to physical limitations the input signal cannot take arbitrary values. Subsequently, a passive fault tolerant controller (PFTC) based on state feedback is proposed to track a reference signal while the control inputs are bounded…
Deposition of radionuclides and stable elements in Tokai-mura
Energy Technology Data Exchange (ETDEWEB)
Ueno, Takashi; Amano, Hikaru [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2003-03-01
This report presents the data of deposition of radionuclides (Sep. 1993-March 2001) and stable elements (Sep. 1993-Oct. 1995) in Tokai-mura. To evaluate the migration of radionuclides and stable elements from the atmosphere to the ground surface, atmospheric deposition samples were collected from Sep. 1993 to March 2001 with three basins (distances to the ground surface were 1.5 m, 4 m and 10 m) set up in the enclosure of JAERI in Tokai-mura, Ibaraki-ken, Japan. Monthly samples were evaporated to dryness to obtain residual samples and measured with a well-type Ge detector for {sup 7}Be, {sup 40}K, {sup 137}Cs and {sup 210}Pb. The analysis of radioactivity revealed clear seasonal variations, with spring peaks in the deposition weight (dry) and in the deposition amounts of all objective radionuclides. Correlation analysis of the deposition data also showed that these radionuclides can be divided into two groups. A part of each dried sample was irradiated with reactor neutrons at JRR-4 for determination of stable element deposition. (author)
Baker, Mark; Beltran, Jane; Buell, Jason; Conrey, Brian; Davis, Tom; Donaldson, Brianna; Detorre-Ozeki, Jeanne; Dibble, Leila; Freeman, Tom; Hammie, Robert; Montgomery, Julie; Pickford, Avery; Wong, Justine
2013-01-01
Sets in the game "Set" are lines in a certain four-dimensional space. Here we introduce planes into the game, leading to interesting mathematical questions, some of which we solve, and to a wonderful variation on the game "Set," in which every tableau of nine cards must contain at least one configuration for a player to pick up.
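A hedged sketch of the structure mentioned above: cards are points of the four-dimensional space over the three-element field, and a "set" is a line, i.e. three points that in every coordinate are all equal or all different — equivalently, each coordinate sums to 0 mod 3. The tuple encoding below is an illustrative convention.

```python
# Encode each card as a 4-tuple over {0, 1, 2}, one entry per attribute
# (number, shading, color, shape). Three cards form a "set" (a line in the
# four-dimensional space over the field with three elements) exactly when
# every coordinate sums to 0 mod 3.

def is_set(a, b, c):
    return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

print(is_set((0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 2)))  # all-different line: True
print(is_set((0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 1)))  # last attribute fails: False
```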
Suppes, Patrick
1972-01-01
This clear and well-developed approach to axiomatic set theory is geared toward upper-level undergraduates and graduate students. It examines the basic paradoxes and history of set theory and advanced topics such as relations and functions, equipollence, finite sets and cardinal numbers, rational and real numbers, and other subjects. 1960 edition.
DEFF Research Database (Denmark)
Rodríguez, J. Tinguaro; Franco de los Ríos, Camilo; Gómez, Daniel
2015-01-01
In this paper we want to stress the relevance of paired fuzzy sets, as already proposed in previous works of the authors, as a family of fuzzy sets that offers a unifying view for different models based upon the opposition of two fuzzy sets, simply allowing the existence of different types...
Enderton, Herbert B
1977-01-01
This is an introductory undergraduate textbook in set theory. In mathematics these days, essentially everything is a set. Some knowledge of set theory is a necessary part of the background everyone needs for further study of mathematics. It is also possible to study set theory for its own interest: it is a subject with intriguing results about simple objects. This book starts with material that nobody can do without. There is no end to what can be learned of set theory, but here is a beginning.
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC Education | GIS Inventory — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
Probabilistic maximum-value wind prediction for offshore environments
DEFF Research Database (Denmark)
Staid, Andrea; Pinson, Pierre; Guikema, Seth D.
2015-01-01
…statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed, convective available potential energy, Charnock parameter, mean sea-level pressure and temperature, as given by European Centre for Medium-Range Weather Forecasts (ECMWF) forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop the full probabilistic distribution of maximum wind speed. Knowledge of the maximum wind speed for an offshore location within a given period can inform decision-making regarding turbine operations, planned maintenance operations and power grid scheduling in order to improve safety and reliability…
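The two-stage idea described above — fit a mean model, then turn the training residuals into a full predictive distribution — can be sketched as follows. Synthetic data and a plain least-squares fit stand in for the meteorological covariates and the GAM/MARS models of the paper.

```python
import numpy as np

# Stage 1: mean model; Stage 2: residual quantiles give the predictive
# distribution. All data below are synthetic placeholders.

rng = np.random.default_rng(0)
gust = rng.uniform(5, 25, 500)                    # hypothetical covariate
max_wind = 1.1 * gust + rng.normal(0, 1.0, 500)   # synthetic "observed" 3-h maxima

# Stage 1: ordinary least squares on one covariate.
A = np.vstack([gust, np.ones_like(gust)]).T
coef, *_ = np.linalg.lstsq(A, max_wind, rcond=None)
residuals = max_wind - A @ coef

# Stage 2: predictive distribution = point forecast + residual quantiles.
def predict_quantiles(g, probs=(0.05, 0.5, 0.95)):
    point = coef[0] * g + coef[1]
    return point + np.quantile(residuals, probs)

print(predict_quantiles(20.0))   # 5%, 50%, 95% predictive quantiles
```

In practice the residual distribution would be modeled per-covariate regime rather than pooled, but the pooling above shows the mechanics.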
Combining Experiments and Simulations Using the Maximum Entropy Principle
DEFF Research Database (Denmark)
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights into the problem. The number of maximum entropy… in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. We highlight each of these contributions in turn and conclude with a discussion of remaining challenges.
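One common form of the maximum entropy procedure mentioned above reweights simulation frames so that an ensemble average matches an experimental value while perturbing the original weights as little as possible; the minimally perturbed weights take the form w_i ∝ exp(−λ O_i), with λ fixed by the constraint. A minimal sketch with synthetic numbers (not data from the papers discussed):

```python
import numpy as np

# Maximum entropy reweighting: given observable values O_i computed from
# simulation frames, find weights w_i ∝ exp(-lam * O_i) whose weighted
# average matches the experimental value O_exp. The average is monotone
# decreasing in lam, so lam can be found by bisection.

def maxent_weights(O, O_exp, lam_lo=-50.0, lam_hi=50.0, tol=1e-10):
    O = np.asarray(O, dtype=float)

    def avg(lam):
        w = np.exp(-lam * (O - O.mean()))   # shift for numerical stability
        w /= w.sum()
        return w, float(w @ O)

    for _ in range(200):
        lam = 0.5 * (lam_lo + lam_hi)
        w, m = avg(lam)
        if abs(m - O_exp) < tol:
            break
        if m > O_exp:
            lam_lo = lam    # need larger lam to pull the average down
        else:
            lam_hi = lam
    return w

O = [1.0, 2.0, 3.0, 4.0]          # synthetic per-frame observable values
w = maxent_weights(O, O_exp=2.0)  # synthetic "experimental" target
print(np.dot(w, O))               # weighted average matches O_exp
```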
Parametric optimization of thermoelectric elements footprint for maximum power generation
DEFF Research Database (Denmark)
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
The development studies in thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation and maximum cost-performance, and to characterize the variation of efficiency in the uni-couple over a wide range of the heat transfer coefficient on the cold junction. The three-dimensional (3D) governing equations of thermoelectricity and heat transfer are solved using the finite element method (FEM) for temperature-dependent properties of TE materials. The results, which are in good agreement with previous computational studies, show that the maximum power generation and the maximum cost-performance in the module occur at An/Ap…
Ethylene Production Maximum Achievable Control Technology (MACT) Compliance Manual
This July 2006 document is intended to help owners and operators of ethylene processes understand and comply with EPA's maximum achievable control technology standards promulgated on July 12, 2002, as amended on April 13, 2005 and April 20, 2006.
Surgical practice in a maximum security prison
African Journals Online (AJOL)
Prison Clinic, Mangaung Maximum Security Prison, Bloemfontein. F Kleinhans, BA (Cur) … HIV positivity rate and the use of the rectum to store foreign objects. … fruit in sunlight. Other positive health-promoting factors may also play a role.
A technique for estimating maximum harvesting effort in a stochastic ...
Indian Academy of Sciences (India)
Unknown
Estimation of maximum harvesting effort has a great impact on the … fluctuating environment has been developed in a two-species competitive system, which shows that under realistic … The existence and local stability properties of the equi-
Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)
U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...
Post optimization paradigm in maximum 3-satisfiability logic programming
Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd
2017-08-01
Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with the problem of finding the maximum number of satisfiable clauses in a particular 3-SAT formula. This paper presents the implementation of an enhanced Hopfield network for accelerating Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post optimization techniques are investigated: the Elliot symmetric activation function, the Gaussian activation function, the Wavelet activation function and the hyperbolic tangent activation function. The performance of these post optimization techniques in accelerating MAX-3SAT logic programming is discussed in terms of the ratio of maximum satisfied clauses, Hamming distance and computation time. Dev-C++ was used as the platform for training, testing and validating the proposed techniques. The results show that the hyperbolic tangent activation function and the Elliot symmetric activation function can be used for MAX-3SAT logic programming.
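The MAX-3SAT objective itself is easy to state in code: count the satisfied clauses of an assignment and maximize over assignments. A brute-force sketch on a toy formula follows; the formula is an arbitrary example, and the Hopfield-network machinery of the paper is not reproduced here.

```python
from itertools import product

# Clauses are tuples of nonzero ints, DIMACS-style: the sign of a literal is
# its polarity, its absolute value is the variable index.

def satisfied(clauses, assignment):
    """Number of clauses with at least one true literal."""
    return sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
n_vars = 3

# Exhaustive search over all 2^n assignments (fine for a toy instance).
best = max(
    satisfied(clauses, dict(zip(range(1, n_vars + 1), bits)))
    for bits in product([False, True], repeat=n_vars)
)
print(best)   # maximum number of simultaneously satisfiable clauses: 4
```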
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. Such models provide a natural representation of heterogeneity in a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention among statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
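For concreteness, a minimal sketch of fitting a two-component normal mixture by maximum likelihood via the EM algorithm, run on synthetic data rather than the stock market and rubber price series used in the paper:

```python
import numpy as np

# Synthetic sample: two well-separated normal components.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])

# Initial guesses for the mixing weight, means and standard deviations.
pi_, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(200):
    # E-step: posterior responsibility of component 1 for each point.
    p1 = pi_ * normal_pdf(x, mu[0], sd[0])
    p2 = (1 - pi_) * normal_pdf(x, mu[1], sd[1])
    r = p1 / (p1 + p2)
    # M-step: responsibility-weighted updates of weight, means, sds.
    pi_ = r.mean()
    mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
    sd = np.array([
        np.sqrt(np.average((x - mu[0]) ** 2, weights=r)),
        np.sqrt(np.average((x - mu[1]) ** 2, weights=1 - r)),
    ])

print(np.round(np.sort(mu), 1))   # component means, close to the true -2 and 3
```

Each EM iteration is guaranteed not to decrease the likelihood, which is why the procedure is a standard way to compute the maximum likelihood fit of a mixture.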
Narrow band interference cancelation in OFDM: A structured maximum likelihood approach
Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.
2012-01-01
This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous
DEFF Research Database (Denmark)
Vatrapu, Ravi; Mukkamala, Raghava Rao; Hussain, Abid
2016-01-01
…automata and agent-based modeling). However, when it comes to organizational and societal units of analysis, there exists no approach to conceptualize, model, analyze, explain, and predict social media interactions as individuals' associations with ideas, values, identities, and so on. To address…, conceptual and formal models of social data, and an analytical framework for combining big social data sets with organizational and societal data sets. Three empirical studies of big social data are presented to illustrate and demonstrate social set analysis in terms of fuzzy set-theoretical sentiment analysis, crisp set-theoretical interaction analysis, and event-studies-oriented set-theoretical visualizations. Implications for big data analytics, current limitations of the set-theoretical approach, and future directions are outlined.
Maximum entropy deconvolution of low count nuclear medicine images
International Nuclear Information System (INIS)
McGrath, D.M.
1998-12-01
Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
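As a generic illustration of the maximum likelihood expectation maximisation approach mentioned above, here is a 1-D Richardson-Lucy deconvolution sketch for Poisson-type count data. It shows the class of algorithm being compared, not the thesis's exact Bayesian variant; the point-source example is illustrative.

```python
import numpy as np

# Richardson-Lucy / ML-EM deconvolution: multiplicative updates that keep the
# estimate nonnegative and (for a normalized PSF) conserve total counts.

def richardson_lucy(observed, psf, n_iter=200):
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Toy example: a point source blurred by a 3-point PSF.
truth = np.zeros(15); truth[7] = 100.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
print(int(restored.argmax()), round(float(restored.sum())))  # prints: 7 100
```

Count preservation — visible in the conserved total of 100 — is the property the abstract highlights as leading to improved quantification.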
Multiple stable isotope fronts during non-isothermal fluid flow
Fekete, Szandra; Weis, Philipp; Scott, Samuel; Driesner, Thomas
2018-02-01
Stable isotope signatures of oxygen, hydrogen and other elements in minerals from hydrothermal veins and metasomatized host rocks are widely used to investigate fluid sources and paths. Previous theoretical studies mostly focused on analyzing stable isotope fronts developing during single-phase, isothermal fluid flow. In this study, numerical simulations were performed to assess how temperature changes, transport phenomena, kinetic vs. equilibrium isotope exchange, and isotopic source signals determine mineral oxygen isotopic compositions during fluid-rock interaction. The simulations focus on one-dimensional scenarios, with non-isothermal single- and two-phase fluid flow, and include the effects of quartz precipitation and dissolution. If isotope exchange between fluid and mineral is fast, a previously unrecognized, significant enrichment in heavy oxygen isotopes of fluids and minerals occurs at the thermal front. The maximum enrichment depends on the initial isotopic composition of fluid and mineral, the fluid-rock ratio and the maximum change in temperature, but is independent of the isotopic composition of the incoming fluid. This thermally induced isotope front propagates faster than the signal related to the initial isotopic composition of the incoming fluid, which forms a trailing front behind the zone of transient heavy oxygen isotope enrichment. Temperature-dependent kinetic rates of isotope exchange between fluid and rock strongly influence the degree of enrichment at the thermal front. In systems where initial isotope values of fluids and rocks are far from equilibrium and isotope fractionation is controlled by kinetics, the temperature increase accelerates the approach of the fluid to equilibrium conditions with the host rock. Consequently, the increase at the thermal front can be less dominant and can even generate fluid values below the initial isotopic composition of the input fluid. As kinetics limit the degree of isotope exchange, a third front may
Metabolic studies in man using stable isotopes
International Nuclear Information System (INIS)
Faust, H.; Jung, K.; Krumbiegel, P.
1993-01-01
In this project, stable isotope compounds and stable isotope pharmaceuticals were used (with emphasis on the application of 15 N) to study several aspects of nitrogen metabolism in man. Of the many methods available, the 15 N stable isotope tracer technique holds a special position because the methodology for application and nitrogen isotope analysis is proven and reliable. Valid routine methods using 15 N analysis by emission spectrometry have been demonstrated. Several methods for the preparation of biological material were developed during our participation in the Coordinated Research Programme. In these studies, direct procedures (i.e. use of diluted urine as a sample without chemical preparation) or rapid isolation methods were favoured. Within the scope of the Analytical Quality Control Service (AQCS), enriched stable isotope reference materials for medical and biological studies were prepared and are now available through the International Atomic Energy Agency. These materials are of special importance, as the increasing application of stable isotopes as tracers in medical, biological and agricultural studies has focused interest on reliable measurements of biological material of different origin. 24 refs
What controls the maximum magnitude of injection-induced earthquakes?
Eaton, D. W. S.
2017-12-01
Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
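The deterministic bound is simple enough to state as a worked example. Assuming the McGarr (2014) relation M0_max = G·ΔV (shear modulus times net injected volume) and the standard moment magnitude conversion Mw = (2/3)(log10 M0 − 9.1) with M0 in N·m, a sketch with illustrative input numbers:

```python
import math

# Deterministic upper bound on induced-earthquake magnitude from the net
# injected fluid volume, per the McGarr (2014) relation described above.

def mcgarr_max_magnitude(shear_modulus_pa, injected_volume_m3):
    m0 = shear_modulus_pa * injected_volume_m3            # max seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)           # moment magnitude

# Illustrative numbers: G = 30 GPa, net injected volume = 10^4 m^3.
print(round(mcgarr_max_magnitude(30e9, 1.0e4), 2))        # -> 3.58
```

This is the sense in which capping the injected volume caps the magnitude; the Shapiro et al. and van der Elst et al. approaches instead require the observed microseismic cloud or seismicity rate as input.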
Maximum organic carbon limits at different melter feed rates (U)
International Nuclear Information System (INIS)
Choi, A.S.
1995-01-01
This report documents the results of a study to assess the impact of varying melter feed rates on the maximum total organic carbon (TOC) limits allowable in the DWPF melter feed. Topics discussed include: carbon content; feed rate; feed composition; melter vapor space temperature; combustion and dilution air; off-gas surges; earlier work on maximum TOC; overview of models; and the results of the work completed
A tropospheric ozone maximum over the equatorial Southern Indian Ocean
Directory of Open Access Journals (Sweden)
L. Zhang
2012-05-01
We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Monitoring Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamic factors. The O3 maximum is dominated by O3 production driven by lightning nitrogen oxides (NOx) emissions, which accounts for 62% of the tropospheric column O3 in May 2006. We find that the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O3 maximum are rather small. O3 production in the lightning outflow from Central Africa and from South America both peak in May and are directly responsible for the O3 maximum over the western ESIO. The lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones and then transported northward to the ESIO.
Dinosaur Metabolism and the Allometry of Maximum Growth Rate
Myhrvold, Nathan P.
2016-01-01
The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data are reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth...
MAXIMUM PRINCIPLE FOR SUBSONIC FLOW WITH VARIABLE ENTROPY
Directory of Open Access Journals (Sweden)
B. Sizykh Grigory
2017-01-01
The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the value of the velocity is not constant everywhere, then its maximum is achieved on the boundary, and only on the boundary, of the considered domain. This property is used when designing the form of an aircraft with a maximum critical value of the Mach number: it is believed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that in the whole considered area of the flow the pressure is a function of density. For an ideal and perfect gas (the role of diffusion is negligible, and the Mendeleev-Clapeyron law is fulfilled), the pressure is a function of density if the entropy is constant in the entire considered area of the flow. An example is shown of a stationary subsonic irrotational flow in which the entropy has different values on different streamlines, and the pressure is not a function of density. Applying the maximum principle for subsonic flow to such a flow would be unjustified. This example shows the relevance of the question of the location of the points of maximum velocity when the entropy is not constant. To clarify the regularities of the location of these points, an analysis of the complete Euler equations (without any simplifying assumptions) was performed in the 3-D case. A new proof of the maximum principle for subsonic flow is proposed that does not rely on the assumption that the pressure is a function of density. Thus, it is shown that the maximum principle for subsonic flow is true for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.
On semidefinite programming relaxations of maximum k-section
de Klerk, E.; Pasechnik, D.V.; Sotirov, R.; Dobre, C.
2012-01-01
We derive a new semidefinite programming bound for the maximum k-section problem. For k = 2 (i.e. for maximum bisection), the new bound is at least as strong as a well-known bound by Poljak and Rendl (SIAM J Optim 5(3):467–487, 1995). For k ≥ 3, the new bound dominates a bound of Karisch and Rendl
Direct maximum parsimony phylogeny reconstruction from genotype data
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2007-01-01
Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of ge...
Maximum spectral demands in the near-fault region
Huang, Yin-Nan; Whittaker, Andrew S.; Luco, Nicolas
2008-01-01
The Next Generation Attenuation (NGA) relationships for shallow crustal earthquakes in the western United States predict a rotated geometric mean of horizontal spectral demand, termed GMRotI50, and not maximum spectral demand. Differences between strike-normal, strike-parallel, geometric-mean, and maximum spectral demands in the near-fault region are investigated using 147 pairs of records selected from the NGA strong motion database. The selected records are for earthquakes with moment magnitude greater than 6.5 and for closest site-to-fault distance less than 15 km. Ratios of maximum spectral demand to NGA-predicted GMRotI50 for each pair of ground motions are presented. The ratio shows a clear dependence on period and the Somerville directivity parameters. Maximum demands can substantially exceed NGA-predicted GMRotI50 demands in the near-fault region, which has significant implications for seismic design, seismic performance assessment, and the next-generation seismic design maps. Strike-normal spectral demands are a significantly unconservative surrogate for maximum spectral demands for closest distance greater than 3 to 5 km. Scale factors that transform NGA-predicted GMRotI50 to a maximum spectral demand in the near-fault region are proposed.
New results on the mid-latitude midnight temperature maximum
Mesquita, Rafael L. A.; Meriwether, John W.; Makela, Jonathan J.; Fisher, Daniel J.; Harding, Brian J.; Sanders, Samuel C.; Tesema, Fasil; Ridley, Aaron J.
2018-04-01
Fabry-Perot interferometer (FPI) measurements of thermospheric temperatures and winds show the detection and successful determination of the latitudinal distribution of the midnight temperature maximum (MTM) in the continental mid-eastern United States. These results were obtained through the operation of the five FPI observatories in the North American Thermosphere Ionosphere Observing Network (NATION) located at the Pisgah Astronomic Research Institute (PAR) (35.2° N, 82.8° W), Virginia Tech (VTI) (37.2° N, 80.4° W), Eastern Kentucky University (EKU) (37.8° N, 84.3° W), Urbana-Champaign (UAO) (40.2° N, 88.2° W), and Ann Arbor (ANN) (42.3° N, 83.8° W). A new approach for analyzing the MTM phenomenon is developed, which features the combination of a method of harmonic thermal background removal followed by a 2-D inversion algorithm to generate sequential 2-D temperature residual maps at 30 min intervals. The simultaneous study of the temperature data from these FPI stations represents a novel analysis of the MTM and its large-scale latitudinal and longitudinal structure. The major finding in examining these maps is the frequent detection of a secondary MTM peak occurring during the early evening hours, nearly 4.5 h prior to the timing of the primary MTM peak that generally appears after midnight. The analysis of these observations shows a strong night-to-night variability for this double-peaked MTM structure. A statistical study of the behavior of the MTM events was carried out to determine the extent of this variability with regard to the seasonal and latitudinal dependence. The results show the presence of the MTM peak(s) in 106 out of the 472 determinable nights (when the MTM presence, or lack thereof, can be determined with certainty in the data set) selected for analysis (22 %) out of the total of 846 nights available. The MTM feature is seen to appear slightly more often during the summer (27 %), followed by fall (22 %), winter (20 %), and spring
International Nuclear Information System (INIS)
Park, Jae-Do; Lee, Hohyun; Bond, Matthew
2014-01-01
Highlights: • Feedforward MPPT scheme for uninterrupted TEG energy harvesting is suggested. • Temperature sensors are used to avoid current measurement or source disconnection. • MPP voltage reference is generated based on OCV vs. temperature differential model. • Optimal operating condition is maintained using hysteresis controller. • Any type of power converter can be used in the proposed scheme. - Abstract: In this paper, a thermoelectric generator (TEG) energy harvesting system with a temperature-sensor-based maximum power point tracking (MPPT) method is presented. Conventional MPPT algorithms for photovoltaic cells may not be suitable for thermoelectric power generation because a significant amount of time is required for TEG systems to reach a steady state. Moreover, complexity and additional power consumption in conventional circuits and periodic disconnection of power source are not desirable for low-power energy harvesting applications. The proposed system can track the varying maximum power point (MPP) with a simple and inexpensive temperature-sensor-based circuit without instantaneous power measurement or TEG disconnection. This system uses TEG’s open circuit voltage (OCV) characteristic with respect to temperature gradient to generate a proper reference voltage signal, i.e., half of the TEG’s OCV. The power converter controller maintains the TEG output voltage at the reference level so that the maximum power can be extracted for the given temperature condition. This feedforward MPPT scheme is inherently stable and can be implemented without any complex microcontroller circuit. The proposed system has been validated analytically and experimentally, and shows a maximum power tracking error of 1.15%
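A minimal sketch of the feedforward scheme described above: the MPP voltage reference is set to half the open-circuit voltage, modeled here as linear in the measured temperature difference, and a hysteresis comparator regulates the output voltage around that reference. The effective Seebeck coefficient, hysteresis band and duty-cycle step are placeholder values, not parameters from the paper.

```python
# Feedforward MPPT for a TEG: V_ref = OCV(dT) / 2, tracked by a bang-bang
# (hysteresis) duty-cycle controller. All constants are illustrative.

ALPHA = 0.05      # effective Seebeck coefficient, V/K (placeholder)
HYST = 0.02       # hysteresis band around the reference, V

def v_ref(t_hot, t_cold):
    """MPP voltage reference: half the modeled open-circuit voltage."""
    return 0.5 * ALPHA * (t_hot - t_cold)

def hysteresis_step(v_out, ref, duty):
    """One controller update keeping v_out inside the band around ref."""
    if v_out > ref + HYST:
        return min(1.0, duty + 0.01)   # draw more current, pull voltage down
    if v_out < ref - HYST:
        return max(0.0, duty - 0.01)   # draw less current, let voltage rise
    return duty                        # inside the band: hold the duty cycle

print(v_ref(120.0, 40.0))   # MPP reference for an 80 K gradient: 2.0 V
```

Because the reference comes from two temperature sensors rather than from power measurements, the source never has to be disconnected, which is the scheme's main appeal for low-power harvesting.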
Perspective: Highly stable vapor-deposited glasses
Ediger, M. D.
2017-12-01
This article describes recent progress in understanding highly stable glasses prepared by physical vapor deposition and provides perspective on further research directions for the field. For a given molecule, vapor-deposited glasses can have higher density and lower enthalpy than any glass that can be prepared by the more traditional route of cooling a liquid, and such glasses also exhibit greatly enhanced kinetic stability. Because vapor-deposited glasses can approach the bottom of the amorphous part of the potential energy landscape, they provide insights into the properties expected for the "ideal glass." Connections between vapor-deposited glasses, liquid-cooled glasses, and deeply supercooled liquids are explored. The generality of stable glass formation for organic molecules is discussed along with the prospects for stable glasses of other types of materials.
Concentration of stable elements in food products
International Nuclear Information System (INIS)
Montford, M.A.; Shank, K.E.; Hendricks, C.; Oakes, T.W.
1980-01-01
Food samples were taken from commercial markets and analyzed for stable element content. The concentrations of most stable elements (Ag, Al, As, Au, Ba, Br, Ca, Ce, Cl, Co, Cr, Cs, Cu, Fe, Hf, I, K, La, Mg, Mn, Mo, Na, Rb, Sb, Sc, Se, Sr, Ta, Th, Ti, V, Zn, Zr) were determined using multiple-element neutron activation analysis, while the concentrations of other elements (Cd, Hg, Ni, Pb) were determined using atomic absorption. The relevance of the concentrations found is noted in relation to other literature values. An earlier study was extended to include the determination of the concentration of stable elements in home-grown products in the vicinity of the Oak Ridge National Laboratory. Comparisons between the commercial and local foodstuff values are discussed.
Faster and Simpler Approximation of Stable Matchings
Directory of Open Access Journals (Sweden)
Katarzyna Paluch
2014-04-01
We give a 3/2-approximation algorithm for finding stable matchings that runs in O(m) time. The previously best known algorithm, by McDermid, has the same approximation ratio but runs in O(n^{3/2}m) time, where n denotes the number of people and m is the total length of the preference lists in a given instance. In addition, the algorithm and the analysis are much simpler. We also give an extension of the algorithm for computing stable many-to-many matchings.
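For readers unfamiliar with the underlying problem, a minimal sketch of the classical Gale-Shapley algorithm for strict, complete preference lists is given below. The paper's 3/2-approximation addresses the harder variant with ties and incomplete lists and is not reproduced here; this is only the textbook baseline.

```python
# Classical Gale-Shapley stable matching (men-proposing variant).
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Return a stable matching as a dict mapping man -> woman."""
    # Precompute each woman's ranking of the men for O(1) comparisons.
    rank = {w: {m: r for r, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}
    engaged_to = {}                          # woman -> man
    free_men = deque(men_prefs)
    while free_men:
        m = free_men.popleft()
        w = men_prefs[m][next_proposal[m]]   # m's best not-yet-tried woman
        next_proposal[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free_men.append(engaged_to[w])   # w trades up; old partner freed
            engaged_to[w] = m
        else:
            free_men.append(m)               # w rejects m; he proposes again
    return {m: w for w, m in engaged_to.items()}

# Tiny example with two people on each side.
men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["a", "b"], "y": ["b", "a"]}
matching = gale_shapley(men, women)   # → {"a": "x", "b": "y"}
```

Each man proposes at most once per woman, so the run time is proportional to the total length of the preference lists, i.e., O(m) in the notation above.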
Moving stable solitons in Galileon theory
International Nuclear Information System (INIS)
Masoumi, Ali; Xiao Xiao
2012-01-01
Despite the no-go theorem of Endlich et al. (2011), which rules out static stable solitons in Galileon theory, we propose a family of solitons that evade the theorem by traveling at the speed of light. These domain-wall-like solitons are stable under small fluctuations: an analysis of perturbations shows neither ghost-like nor tachyon-like instabilities, and perturbative collisions of these solitons suggest that they pass through each other asymptotically, which may be an indication of the integrability of the theory itself.
Bordism, Stable Homotopy and Adams Spectral Sequences
Kochman, Stanley O
1996-01-01
This book is a compilation of lecture notes that were prepared for the graduate course "Adams Spectral Sequences and Stable Homotopy Theory" given at The Fields Institute during the fall of 1995. The aim of this volume is to prepare students with a knowledge of elementary algebraic topology to study recent developments in stable homotopy theory, such as the nilpotence and periodicity theorems. Suitable as a text for an intermediate course in algebraic topology, this book provides a direct exposition of the basic concepts of bordism, characteristic classes, Adams spectral sequences, and Brown-Peterson homology.
Stable isotopes in Lithuanian bioarcheological material
Skipityte, Raminta; Jankauskas, Rimantas; Remeikis, Vidmantas
2015-04-01
Investigation of bioarcheological material of ancient human populations allows us to understand the subsistence behavior associated with various adaptations to the environment. Feeding habits are essential to the survival and growth of ancient populations. Stable isotope analysis is an accepted tool in paleodiet (Schutkowski et al, 1999) and paleoenvironmental (Zernitskaya et al, 2014) studies. However, stable isotopes can be useful not only in investigating human feeding habits but also in describing the social and cultural structure of past populations (Le Huray and Schutkowski, 2005). Only a few stable isotope investigations have previously been performed in the Lithuanian region, suggesting a fairly uniform diet between males and females and protein intake from freshwater fish and animal protein. Previously, stable isotope analysis had only been used to study a Stone Age population; more recently, studies have been conducted on Iron Age and Late medieval samples (Jacobs et al, 2009). Nevertheless, a more precise examination was needed. In this study, stable isotope analyses were performed on human bone collagen and apatite samples. The data represented various periods (from the 5th-7th to the 18th century). Stable carbon and nitrogen isotope analysis of medieval populations indicated that individuals at the studied sites in Lithuania were almost exclusively consuming C3 plants, C3-fed terrestrial animals, and some freshwater resources. The current investigation demonstrated social differences between elites and country people and is promising for paleodietary and daily life reconstruction. Acknowledgement: I thank Prof. Dr. G. Grupe, Director of the Anthropological and Palaeoanatomical State Collection in Munich, for providing the opportunity to work in her laboratory. Part of this work was funded by DAAD. Antanaitis-Jacobs, Indre, et al. "Diet in early Lithuanian prehistory and the new stable isotope evidence." Archaeologia Baltica 12 (2009): 12-30. Le Huray, Jonathan D., and Holger
Unconditionally stable microwave Si-IMPATT amplifiers
International Nuclear Information System (INIS)
Seddik, M.M.
1986-07-01
The purpose of this investigation has been the development of an improved understanding of the design and analysis of microwave reflection amplifiers employing the negative resistance property of IMPATT devices. An unconditionally stable amplifier circuit using a silicon IMPATT diode is designed. The problems associated with the design procedures and the stability criterion are discussed. A computer program is developed to perform the computations. The stable characteristics of a reflection-type Si-IMPATT amplifier, such as gain, frequency and bandwidth, are examined. At large signal drive levels, a gain of 7 dB with a bandwidth of 800 MHz was obtained at 22.5 mA. (author)
The Local Maximum Clustering Method and Its Application in Microarray Gene Expression Data Analysis
Directory of Open Access Journals (Sweden)
Chen Yidong
2004-01-01
An unsupervised data clustering method, called the local maximum clustering (LMC) method, is proposed for identifying clusters in experimental data sets based on research interest. A magnitude property is defined according to research purposes, and data sets are clustered around each local maximum of the magnitude property. By properly defining a magnitude property, this method can overcome many difficulties in microarray data clustering such as reduced projection in similarities, noise, and arbitrary gene distribution. To critically evaluate the performance of this clustering method in comparison with other methods, we designed three model data sets with known cluster distributions and applied the LMC method as well as the hierarchical clustering method, the k-means clustering method, and the self-organizing map method to these model data sets. The results show that the LMC method produces the most accurate clustering results. As an example of application, we applied the method to cluster the leukemia samples reported in the microarray study of Golub et al. (1999).
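The clustering idea described above can be sketched compactly: each point hill-climbs to the neighbor (within a radius) with the highest "magnitude", and points that reach the same local maximum form one cluster. The magnitude function and radius below are illustrative choices, not the paper's exact definitions.

```python
# Minimal sketch of local-maximum clustering (LMC) by neighborhood
# hill-climbing on a user-defined magnitude property.
import math

def lmc(points, magnitude, radius):
    """points: list of coordinate tuples; magnitude: point -> float.
    Returns, for each point, the index of the local maximum it climbs to."""
    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= radius]

    def climb(i):
        while True:
            best = max(neighbors(i), key=lambda j: magnitude(points[j]))
            if magnitude(points[best]) <= magnitude(points[i]):
                return i          # i is a local maximum of the magnitude
            i = best

    return [climb(i) for i in range(len(points))]

# Two 1-D clumps; the magnitude here is a simple local point density.
pts = [(0.0,), (0.1,), (0.2,), (5.0,), (5.1,), (5.2,)]
density = lambda p: sum(1 for q in pts if math.dist(p, q) <= 0.15)
labels = lmc(pts, density, radius=0.5)   # → [1, 1, 1, 4, 4, 4]
```

With density as the magnitude, the two clump centers (indices 1 and 4) are the local maxima, so the labels split the points into two clusters.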
An Approximate Proximal Bundle Method to Minimize a Class of Maximum Eigenvalue Functions
Directory of Open Access Journals (Sweden)
Wei Wang
2014-01-01
We present an approximate nonsmooth algorithm to solve a minimization problem in which the objective function is the sum of a maximum eigenvalue function of matrices and a convex function. The essential idea for solving the optimization problem in this paper is similar to that of the proximal bundle method; the difference is that we choose approximate subgradients and function values to construct an approximate cutting-plane model for the above-mentioned problem. An important advantage of the approximate cutting-plane model for the objective function is that it is more stable than the cutting-plane model. In addition, an approximate proximal bundle algorithm is given. Furthermore, the sequences generated by the algorithm converge to the optimal solution of the original problem.
A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation
Directory of Open Access Journals (Sweden)
Shu Cai
2016-12-01
Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, they have a higher spatial resolution compared to existing methods based on the ML criterion.
Mu, Yang; Yang, Hou-Yun; Wang, Ya-Zhou; He, Chuan-Shu; Zhao, Quan-Bao; Wang, Yi; Yu, Han-Qing
2014-06-10
Fermentative hydrogen production from wastes has many advantages compared to various chemical methods. Methodology for characterizing the hydrogen-producing activity of anaerobic mixed cultures is essential for monitoring reactor operation in fermentative hydrogen production; however, such standardized methodologies are lacking. In the present study, a new index, the maximum specific hydrogen-producing activity (SHAm) of anaerobic mixed cultures, was proposed, and a reliable and simple method, named the SHAm test, was developed to determine it. Furthermore, the influences of various parameters on the determination of the SHAm value of anaerobic mixed cultures were evaluated. Additionally, the SHAm assay was tested for different types of substrates and bacterial inocula. Our results demonstrate that this novel SHAm assay is a rapid, accurate and simple methodology for determining the hydrogen-producing activity of anaerobic mixed cultures. Application of this approach is thus beneficial for establishing a stable anaerobic hydrogen-producing system.
Avinash-Shukla mass limit for the maximum dust mass supported against gravity by electric fields
Avinash, K.
2010-08-01
The existence of a new class of astrophysical objects, where gravity is balanced by the shielded electric fields associated with the electric charge on the dust, is shown. Further, a mass limit MA for the maximum dust mass that can be supported against gravitational collapse by these fields is obtained. If the total mass of the dust in the interstellar cloud MD > MA, the dust collapses, while if MD < MA, stable equilibrium may be achieved. Heuristic arguments are given to show that the physics of the mass limit is similar to that of Chandrasekhar's mass limit for compact objects, and the similarity of these dust configurations with neutron stars and white dwarfs is pointed out. The effect of grain size distribution on the mass limit and strong correlation effects in the core of such objects are discussed. Possible locations of these dust configurations inside interstellar clouds are pointed out.
Simulation of maximum light use efficiency for some typical vegetation types in China
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
Maximum light use efficiency (εmax) is a key parameter for the estimation of net primary productivity (NPP) derived from remote sensing data, but there is still much divergence over its value for each vegetation type. The εmax for some typical vegetation types in China is simulated using a modified least squares function based on NOAA/AVHRR remote sensing data and field-observed NPP data. The vegetation classification accuracy is introduced into the process, and a sensitivity analysis of εmax to vegetation classification accuracy is also conducted. The results show that the simulated values of εmax are greater than the value used in the CASA model and less than the values simulated with the BIOME-BGC model, which is consistent with some other studies. The relative error of εmax resulting from classification accuracy is -5.5% to 8.0%. This indicates that the simulated values of εmax are reliable and stable.
K. Nawata (Kazumitsu); M.J. McAleer (Michael)
2013-01-01
Hausman (1978) developed a widely-used model specification test that has passed the test of time. The test is based on two estimators, one being consistent under the null hypothesis but inconsistent under the alternative, and the other being consistent under both the null and the alternative hypotheses.
Energy Technology Data Exchange (ETDEWEB)
Briere, R [Commissariat a l' Energie Atomique, Grenoble (France). Centre d' Etudes Nucleaires, Laboratoire de chimie organique physique
1967-06-01
A new synthesis of di-tert-butyl nitroxide using the reaction between tert-butyl magnesium chloride and nitro-tert-butane (maximum yield 45 per cent, purity 86 per cent) is presented in the first section. Synthesis and investigation of stable free piperidine-N-oxyl radicals are described in the second section; these compounds are obtained by oxidation of triacetonamine derivatives. All these nitroxides have been characterised by their I.R., U.V. and E.P.R. absorption spectra. The great chemical inertness of the nitroxide group permitted the synthesis of a stable bi-radical of the nitroxide type by condensation of 2,2,6,6-tetramethyl-piperid-4-one-1-oxyl with hydrazine, described in the final section. (author)
Log-stable concentration distributions of trace elements in biomedical samples
International Nuclear Information System (INIS)
Kubala-Kukus, A.; Kuternoga, E.; Braziewicz, J.; Pajek, M.
2004-01-01
In the present paper, which follows our earlier observation that the asymmetric and long-tailed concentration distributions of trace elements in biomedical samples, measured by X-ray fluorescence techniques, can be modeled by log-stable distributions, further specific aspects of this observation are discussed. First, we demonstrate that, typically, for a quite substantial fraction (10-20%) of trace elements studied in different kinds of biomedical samples, the measured concentration distributions are in fact described by 'symmetric' log-stable distributions, i.e. asymmetric distributions which are described by symmetric stable distributions. This observation is, in fact, expected for the random multiplicative process which models the concentration distributions of trace elements in biomedical samples. The log-stable nature of the concentration distribution of trace elements results in several problems of a statistical nature, which have to be addressed in XRF data analysis practice. Consequently, in the present paper the following problems are discussed in detail: (i) the estimation of parameters for stable distributions and (ii) the testing of the log-stable nature of the concentration distribution by using the Anderson-Darling (A²) test, especially for symmetric stable distributions. In particular, maximum likelihood estimation and Monte Carlo simulation techniques were used, respectively, for the estimation of stable distribution parameters and the calculation of the critical values for the Anderson-Darling test. The discussed ideas are exemplified by the results of a study of trace element concentration distributions in selected biomedical samples, which were obtained by using X-ray fluorescence (XRF, TXRF) methods.
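The A² check described above can be sketched for the α = 2 special case, where the symmetric log-stable law reduces to the log-normal: compute the Anderson-Darling statistic of the log-transformed concentrations against a normal with fitted mean and standard deviation. Critical values for general stable laws would come from the Monte Carlo procedure mentioned in the text; the sample below is synthetic.

```python
# Anderson-Darling A^2 statistic for log-normality (the alpha = 2 case
# of a symmetric log-stable fit), using only the standard library.
import math
from statistics import NormalDist

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def anderson_darling_lognormal(sample):
    """A^2 of log(sample) against a normal with fitted mu and sigma."""
    logs = sorted(math.log(v) for v in sample)
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / (n - 1))
    s = 0.0
    for i in range(1, n + 1):
        u_i = normal_cdf(logs[i - 1], mu, sigma)
        u_rev = normal_cdf(logs[n - i], mu, sigma)
        s += (2 * i - 1) * (math.log(u_i) + math.log(1.0 - u_rev))
    return -n - s / n

# Synthetic, well-behaved "log-normal" sample built from exact normal
# quantiles; a small A^2 is consistent with the log-normal model.
nd = NormalDist()
sample = [math.exp(nd.inv_cdf((i + 0.5) / 50)) for i in range(50)]
a2 = anderson_darling_lognormal(sample)
```

In practice the computed A² would be compared against critical values appropriate to the case of estimated parameters, which is exactly where the simulated critical values mentioned in the abstract come in.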
Maximum vehicle cabin temperatures under different meteorological conditions
Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John
2009-05-01
A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41-76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and to estimate past cabin temperatures for use in forensic analyses.
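The abstract does not report the fitted coefficients, so the sketch below only shows the form of the first model: an ordinary least-squares fit of maximum cabin temperature on maximum air temperature and average daily solar radiation. The data and the coefficients they encode are synthetic, for illustration only.

```python
# Two-predictor ordinary least squares via the 3x3 normal equations,
# standard library only.

def fit_two_predictor_ols(rows):
    """rows: (air_temp_C, solar_Wm2, cabin_temp_C); returns (b0, b1, b2)."""
    X = [[1.0, t, s] for t, s, _ in rows]
    y = [c for _, _, c in rows]
    # Build X'X and X'y, then solve by Gaussian elimination.
    xtx = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(3)]
           for i in range(3)]
    xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(3)]
    for i in range(3):                      # forward elimination
        p = xtx[i][i]
        for j in range(i + 1, 3):
            f = xtx[j][i] / p
            xtx[j] = [a - f * b for a, b in zip(xtx[j], xtx[i])]
            xty[j] -= f * xty[i]
    b = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        b[i] = (xty[i] - sum(xtx[i][j] * b[j]
                             for j in range(i + 1, 3))) / xtx[i][i]
    return tuple(b)

# Synthetic days generated from cabin = 5 + 1.2*air + 0.05*solar (no noise),
# so the fit should recover exactly these coefficients.
days = [(20.0, 300.0, 44.0), (25.0, 500.0, 60.0),
        (30.0, 700.0, 76.0), (35.0, 400.0, 67.0)]
b0, b1, b2 = fit_two_predictor_ols(days)
```

The second model in the study swaps solar radiation for cloud-cover percentage; structurally the fit is the same with a different second predictor.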
Fractal Dimension and Maximum Sunspot Number in Solar Cycle
Directory of Open Access Journals (Sweden)
R.-S. Kim
2006-09-01
The fractal dimension is a quantitative parameter describing the characteristics of irregular time series. In this study, we use this parameter to analyze the irregular aspects of solar activity and to predict the maximum sunspot number in the following solar cycle by examining time series of the sunspot number. For this, we considered the daily sunspot number since 1850 from SIDC (Solar Influences Data analysis Center) and then estimated the cycle variation of the fractal dimension by using Higuchi's method. We examined the relationship between this fractal dimension and the maximum monthly sunspot number in each solar cycle. As a result, we found that there is a strong inverse relationship between the fractal dimension and the maximum monthly sunspot number. Using this relation, we predicted the maximum sunspot number in the solar cycle from the fractal dimension of the sunspot numbers during the solar activity increasing phase. The successful prediction is proven by a good correlation (r = 0.89) between the observed and predicted maximum sunspot numbers in the solar cycles.
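Higuchi's method, used above to estimate the fractal dimension of the sunspot-number series, can be sketched directly: build k-decimated sub-series, compute their normalized curve lengths L(k), and take the slope of log L(k) versus log(1/k). This is a generic sketch, not the study's exact code; the kmax choice is illustrative.

```python
# Higuchi's method for the fractal dimension of a 1-D time series.
import math

def higuchi_fd(x, kmax):
    """Estimate the fractal dimension of series x via Higuchi's method."""
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                       # k shifted sub-series
            num = (n - 1 - m) // k               # steps in this sub-series
            if num < 1:
                continue
            dist = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                       for i in range(1, num + 1))
            # Normalize the curve length to the full series length.
            lengths.append(dist * (n - 1) / (num * k) / k)
        log_k.append(math.log(1.0 / k))
        log_l.append(math.log(sum(lengths) / len(lengths)))
    # Least-squares slope of log L(k) vs log(1/k) is the fractal dimension.
    mk = sum(log_k) / len(log_k)
    ml = sum(log_l) / len(log_l)
    return (sum((a - mk) * (b - ml) for a, b in zip(log_k, log_l))
            / sum((a - mk) ** 2 for a in log_k))

# Sanity check: a straight line should give a fractal dimension of 1.
line = [0.1 * i for i in range(1000)]
fd = higuchi_fd(line, kmax=8)
```

For irregular series such as daily sunspot numbers, the estimate rises toward 2 as the series becomes more noise-like, which is the quantity correlated with cycle amplitude in the study.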
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.; Ito, N.
2013-01-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences / Società Italiana di Fisica / Springer-Verlag 2013.
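The proposed universal upper bound referred to above is, in the low-dissipation analysis of the cited Esposito et al. paper, η_C/(2 − η_C), which sits between the Curzon-Ahlborn value and the Carnot limit. A quick numerical comparison (the reservoir temperatures are arbitrary illustrative values):

```python
# Efficiency at maximum power: Carnot limit, Curzon-Ahlborn value, and
# the low-dissipation upper bound eta_C / (2 - eta_C).
import math

def eta_carnot(t_cold, t_hot):
    return 1.0 - t_cold / t_hot

def eta_curzon_ahlborn(t_cold, t_hot):
    return 1.0 - math.sqrt(t_cold / t_hot)

def eta_upper_bound(t_cold, t_hot):
    ec = eta_carnot(t_cold, t_hot)
    return ec / (2.0 - ec)

tc, th = 300.0, 600.0
# eta_C = 0.5, eta_CA ≈ 0.293, bound = 0.5 / 1.5 ≈ 0.333
```

The simulation result in the abstract, an efficiency at maximum power that reaches and then exceeds this bound with increasing size, is notable precisely because the bound holds within the low-dissipation model being extended.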
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
International Nuclear Information System (INIS)
Wang, Chao; Chen, Lingen; Xia, Shaojun; Sun, Fengrui
2016-01-01
A sulphuric acid decomposition process in a tubular plug-flow reactor with fixed inlet flow rate and a completely controllable exterior wall temperature profile and reactant pressure profile is studied in this paper by using finite-time thermodynamics. The maximum production rate of the target product SO2 and the optimal exterior wall temperature profile and reactant pressure profile are obtained by using a nonlinear programming method. The optimal reactor with the maximum production rate is then compared with a reference reactor with a linear exterior wall temperature profile and with the optimal reactor with minimum entropy generation rate. The results show that the SO2 production rate of the optimal reactor increases by more than 7%. The optimization of the temperature profile has little influence on the production rate, while the optimization of the reactant pressure profile can significantly increase the production rate. The results obtained may provide some guidelines for the design of real tubular reactors. - Highlights: • Sulphuric acid decomposition process in a tubular plug-flow reactor is studied. • Fixed inlet flow rate and controllable temperature and pressure profiles are set. • Maximum production rate of the target product SO2 is obtained. • Corresponding optimal temperature and pressure profiles are derived. • Production rate of SO2 of the optimal reactor increases by 7%.
Petrography, compositional characteristics and stable isotope ...
African Journals Online (AJOL)
PROF EKWUEME
Subsurface samples of the predominantly carbonate Ewekoro Formation, obtained from Ibese core hole within the Dahomey basin were used in this study. Investigations entail petrographic, elemental composition as well as stable isotopes (carbon and oxygen) geochemistry in order to deduce the different microfacies and ...
Working conditions remain stable in the Netherlands
Houtman, I.; Hooftman, W.
2008-01-01
Despite significant changes in the national questionnaires on work and health, the quality of work as well as health complaints in the Netherlands appear to be relatively stable. Pace of work seems to be on the increase again and more people are working in excess of their contractual hours.
Thermally stable sintered porous metal articles
International Nuclear Information System (INIS)
Gombach, A.L.; Thellmann, E.L.
1980-01-01
A sintered porous metal article is provided which is essentially thermally stable at elevated temperatures. In addition, a method for producing such an article is also provided which method comprises preparing a blend of base metal particles and active dispersoid particles, forming the mixture into an article of the desired shape, and heating the so-formed article at sintering temperatures
TOF for heavy stable particle identification
International Nuclear Information System (INIS)
Chang, C.Y.
1983-01-01
Searching for heavy stable particle production in a new energy region of hadron-hadron collisions is of fundamental theoretical interest. Observation of such particles produced in high energy collisions would indicate the existence of stable heavy leptons or of a massive hadronic system carrying new quantum numbers. Experimentally, evidence of their production has not been found in pp collisions, either at FNAL or at the CERN ISR, for √S = 23 and 62 GeV respectively. However, many theories beyond the standard model do predict their existence on a mass scale ranging from 50 to a few hundred GeV. If so, a high luminosity TeV collider would be an ideal hunting ground for the production of such a speculated object. To measure the mass of a heavy stable charged particle, one usually uses its time of flight (TOF) and/or dE/dX information. For a heavy neutral particle, one hopes it may decay at some later time after its production; hence a pair of jets, or a jet associated with a high p_T muon, originating from some place other than the interaction point (IP) of the colliding beams may be a good signal. In this note, we examine the feasibility of a TOF measurement on a heavy stable particle produced in pp collisions at √S = 1 TeV and a luminosity of 10^33 cm^-2 s^-1 with a single arm spectrometer pointing to the IP.
Axisymmetric MHD stable sloshing ion distributions
International Nuclear Information System (INIS)
Berk, H.L.; Dominguez, N.; Roslyakov, G.V.
1986-07-01
The MHD stability of a sloshing ion distribution is investigated in a symmetric mirror cell. Fokker-Planck calculations show that stable configurations are possible for ion injection energies that are at least 150 times greater than the electron temperature. Special axial magnetic field profiles are suggested to optimize the favorable MHD properties.
Unconditionally stable integration of Maxwell's equations
Verwer, J.G.; Bochev, Mikhail A.
Numerical integration of Maxwell's equations is often based on explicit methods accepting a stability step size restriction. In literature evidence is given that there is also a need for unconditionally stable methods, as exemplified by the successful alternating direction implicit finite difference
Unconditionally stable integration of Maxwell's equations
J.G. Verwer (Jan); M.A. Botchev
2008-01-01
Numerical integration of Maxwell's equations is often based on explicit methods accepting a stability step size restriction. In literature evidence is given that there is also a need for unconditionally stable methods, as exemplified by the successful alternating direction
Unconditionally stable integration of Maxwell's equations
J.G. Verwer (Jan); M.A. Botchev
2009-01-01
Numerical integration of Maxwell's equations is often based on explicit methods accepting a stability step size restriction. In literature evidence is given that there is also a need for unconditionally stable methods, as exemplified by the successful alternating direction implicit –
Method of producing thermally stable uranium carbonitrides
International Nuclear Information System (INIS)
Ugajin, M.; Takahashi, I.
1975-01-01
A thermally stable uranium carbonitride can be produced by adding tungsten and/or molybdenum in an amount of 0.2 wt percent or more, preferably 0.5 wt percent or more, to pure uranium carbonitride. (U.S.)
Champion Island, Galapagos Stable Oxygen Calibration Data
National Oceanic and Atmospheric Administration, Department of Commerce — Galapagos Coral Stable Oxygen Calibration Data. Sites: Bartolome Island: 0 deg, 17 min S, 90 deg 33 min W. Champion Island: 1 deg, 15 min S, 90 deg, 05 min W. Urvina...
26 S proteasomes function as stable entities
DEFF Research Database (Denmark)
Hendil, Klavs B; Hartmann-Petersen, Rasmus; Tanaka, Keiji
2002-01-01
, shuttles between a free state and the 26-S proteasome, bringing substrate to the complex. However, S5a was not found in the free state in HeLa cells. Moreover, all subunits in PA700, including S5a, exchanged at similar low rates. It therefore seems that 26-S proteasomes function as stable entities during...
Formal derivation of a stable marriage algorithm.
Bijlsma, A.
1991-01-01
In this paper the well-known Stable Marriage Problem is considered once again. The name of this programming problem comes from the terms in which it was first described [2]: A certain community consists of n men and n women. Each person ranks those of the opposite sex in accordance with his or
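The classical solution to the Stable Marriage Problem described above is the Gale-Shapley deferred-acceptance algorithm, sketched below; the paper's own formally derived algorithm may differ in presentation. Preference lists are rankings, best first.

```python
# Hedged sketch of the Gale-Shapley deferred-acceptance algorithm for the
# Stable Marriage Problem (man-proposing, hence man-optimal).
def stable_matching(men_prefs, women_prefs):
    """Return a stable matching as {man: woman}."""
    # rank[w][m] = position of m on w's list (lower = preferred)
    rank = {w: {m: r for r, m in enumerate(ps)} for w, ps in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}  # index into each man's list
    engaged = {}                               # woman -> current partner
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]     # m's best not-yet-tried woman
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]: # w prefers m: dump old partner
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                     # w rejects m; he tries again
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(stable_matching(men, women))  # each man gets his first choice here
```

The algorithm terminates after at most n^2 proposals, and the result contains no blocking pair: any woman a man prefers to his partner has already rejected him for someone she ranks higher.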
Czech Academy of Sciences Publication Activity Database
Doležal, Martin; Rmoutil, M.; Vejnar, B.; Vlasák, V.
2016-01-01
Vol. 440, No. 2 (2016), pp. 922-939. ISSN 0022-247X. Institutional support: RVO:67985840. Keywords: Haar meager set * Haar null set * Polish group. Subject RIV: BA - General Mathematics. Impact factor: 1.064, year: 2016. http://www.sciencedirect.com/science/article/pii/S0022247X1600305X
Setting goals in psychotherapy
DEFF Research Database (Denmark)
Emiliussen, Jakob; Wagoner, Brady
2013-01-01
The present study is concerned with the ethical dilemmas of setting goals in therapy. The main questions that it aims to answer are: who is to set the goals for therapy and who is to decide when they have been reached? The study is based on four semi-structured, phenomenological interviews...
Barasz, Kate; John, Leslie K; Keenan, Elizabeth A; Norton, Michael I
2017-10-01
Pseudo-set framing, arbitrarily grouping items or tasks together as part of an apparent "set," motivates people to reach perceived completion points. Pseudo-set framing changes gambling choices (Study 1), effort (Studies 2 and 3), giving behavior (Field Data and Study 4), and purchase decisions (Study 5). These effects persist in the absence of any reward, when a cost must be incurred, and after participants are explicitly informed of the arbitrariness of the set. Drawing on Gestalt psychology, we develop a conceptual account that predicts what will, and will not, act as a pseudo-set, and defines the psychological process through which these pseudo-sets affect behavior: over and above typical reference points, pseudo-set framing alters perceptions of (in)completeness, making intermediate progress seem less complete. In turn, these feelings of incompleteness motivate people to persist until the pseudo-set has been fulfilled. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Moschovakis, YN
1987-01-01
Now available in paperback, this monograph is a self-contained exposition of the main results and methods of descriptive set theory. It develops all the necessary background material from logic and recursion theory, and treats both classical descriptive set theory and the effective theory developed by logicians.
Directory of Open Access Journals (Sweden)
Shawkat Alkhazaleh
2011-01-01
We introduce the concept of the possibility fuzzy soft set and its operations, and study some of its properties. We give applications of this theory in solving a decision-making problem. We also introduce a similarity measure of two possibility fuzzy soft sets and discuss its application in a medical diagnosis problem.
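To make the similarity-measure idea concrete, here is a generic complement-of-mean-difference measure for fuzzy soft sets. This is an illustrative assumption, not necessarily the measure defined in the paper, and the encoding {parameter: {element: membership}} is likewise assumed.

```python
# Hedged sketch: one common style of similarity measure between two fuzzy
# soft sets over the same parameters and universe. The paper's measure for
# *possibility* fuzzy soft sets may be defined differently.
def similarity(F, G):
    """1 minus the mean absolute difference of membership values in [0, 1]."""
    diffs = [abs(F[e][x] - G[e][x]) for e in F for x in F[e]]
    return 1.0 - sum(diffs) / len(diffs)

# Two fuzzy soft sets over parameters e1, e2 and universe {u1, u2}:
F = {"e1": {"u1": 0.8, "u2": 0.3}, "e2": {"u1": 0.5, "u2": 0.9}}
G = {"e1": {"u1": 0.6, "u2": 0.4}, "e2": {"u1": 0.5, "u2": 0.7}}
print(similarity(F, G))
```

In a decision or diagnosis setting, such a score lets one compare a patient's observed membership profile against stored prototypes and pick the closest match.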
Archaeological predictive model set.
2015-03-01
This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...
Leemans, I.B.; Broomhall, Susan
2017-01-01
Digital emotion research has yet to make history. Until now large data set mining has not been a very active field of research in early modern emotion studies. This is indeed surprising since first, the early modern field has such rich, copyright-free, digitized data sets and second, emotion studies
Stroud, Wesley
2018-01-01
All educators want their classrooms to be inviting areas that support investigations. However, a common mistake is to fill learning spaces with items or objects that are set up by the teacher or are simply "for show." This type of setting, although it may create a comfortable space for students, fails to stimulate investigations and…