WorldWideScience

Sample records for maximum sized sets

  1. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic.

    Science.gov (United States)

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.

  2. Tutte sets in graphs II: The complexity of finding maximum Tutte sets

    NARCIS (Netherlands)

    Bauer, D.; Broersma, Haitze J.; Kahl, N.; Morgana, A.; Schmeichel, E.; Surowiec, T.

    2007-01-01

    A well-known formula of Tutte and Berge expresses the size of a maximum matching in a graph $G$ in terms of what is usually called the deficiency. A subset $X$ of $V(G)$ for which this deficiency is attained is called a Tutte set of $G$. While much is known about maximum matchings, less is known

  3. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  4. Mechanical limits to maximum weapon size in a giant rhinoceros beetle.

    Science.gov (United States)

    McCullough, Erin L

    2014-07-07

    The horns of giant rhinoceros beetles are a classic example of the elaborate morphologies that can result from sexual selection. Theory predicts that sexual traits will evolve to be increasingly exaggerated until survival costs balance the reproductive benefits of further trait elaboration. In Trypoxylus dichotomus, long horns confer a competitive advantage to males, yet previous studies have found that they do not incur survival costs. It is therefore unlikely that horn size is limited by the theoretical cost-benefit equilibrium. However, males sometimes fight vigorously enough to break their horns, so mechanical limits may set an upper bound on horn size. Here, I tested this mechanical limit hypothesis by measuring safety factors across the full range of horn sizes. Safety factors were calculated as the ratio between the force required to break a horn and the maximum force exerted on a horn during a typical fight. I found that safety factors decrease with increasing horn length, indicating that the risk of breakage is indeed highest for the longest horns. Structural failure of oversized horns may therefore oppose the continued exaggeration of horn length driven by male-male competition and set a mechanical limit on the maximum size of rhinoceros beetle horns. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  5. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  6. Size dependence of efficiency at maximum power of heat engine

    KAUST Repository

    Izumida, Y.; Ito, N.

    2013-01-01

    We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences / Società Italiana di Fisica / Springer-Verlag 2013.

  7. Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.

    Science.gov (United States)

    Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N

    2014-01-01

    Tiger sharks (Galeocerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 and 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.
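
    The growth analysis here rests on length increments between tagging and recapture. As a rough illustration of how such mark-recapture increments can be fit, the sketch below uses the Fabens form of the von Bertalanffy growth function on synthetic data; the parameter values, the data, and the use of scipy's curve_fit are assumptions for the example, not the authors' actual model or code.

        # Fabens form of the von Bertalanffy growth function:
        #   dL = (Linf - L1) * (1 - exp(-K * dt))
        # fit to synthetic tag-recapture increments (illustrative only).
        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(0)
        Linf_true, K_true = 440.0, 0.2            # cm TL and 1/yr, assumed values
        L1 = rng.uniform(80, 350, 50)             # length at tagging (cm TL)
        dt = rng.uniform(0.3, 3.0, 50)            # years at liberty
        dL = (Linf_true - L1) * (1 - np.exp(-K_true * dt)) + rng.normal(0, 5, 50)

        def fabens(X, Linf, K):
            L1, dt = X
            return (Linf - L1) * (1 - np.exp(-K * dt))

        (Linf_hat, K_hat), _ = curve_fit(fabens, (L1, dt), dL, p0=(400.0, 0.1))
        print(f"Linf = {Linf_hat:.0f} cm TL, K = {K_hat:.2f} /yr")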

  8. Analyzing ROC curves using the effective set-size model

    Science.gov (United States)

    Samuelson, Frank W.; Abbey, Craig K.; He, Xin

    2018-03-01

    The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical
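
    Because the model treats the image rating as the maximum of M* independent location responses, the ROC curve has a simple closed form: with one signal site of detectability d' and M*-1 noise sites, FPF(t) = 1 - Φ(t)^M* and TPF(t) = 1 - Φ(t)^(M*-1) Φ(t - d'). A minimal sketch tracing this curve (my notation, not the authors' fitting code):

        # Closed-form ROC for the Effective Set-Size model: the image rating is
        # the maximum of M* location responses; on signal-present images one
        # location has mean d'. Illustrative sketch only.
        import numpy as np
        from scipy.stats import norm

        def ess_roc(m_star: float, d_prime: float, t: np.ndarray):
            fpf = 1.0 - norm.cdf(t) ** m_star
            tpf = 1.0 - norm.cdf(t) ** (m_star - 1.0) * norm.cdf(t - d_prime)
            return fpf, tpf

        t = np.linspace(-4.0, 8.0, 400)
        fpf, tpf = ess_roc(m_star=6.0, d_prime=2.0, t=t)
        auc = np.sum((fpf[:-1] - fpf[1:]) * (tpf[:-1] + tpf[1:]) / 2.0)  # trapezoid rule
        print(f"AUC ~ {auc:.3f}")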

  9. Comparing fishers' and scientific estimates of size at maturity and maximum body size as indicators for overfishing.

    Science.gov (United States)

    Mclean, Elizabeth L; Forrester, Graham E

    2018-04-01

    We tested whether fishers' local ecological knowledge (LEK) of two fish life-history parameters, size at maturity (SAM) and maximum body size (MS), was comparable to scientific estimates (SEK) of the same parameters, and whether LEK influenced fishers' perceptions of sustainability. Local ecological knowledge was documented for 82 fishers from a small-scale fishery in Samaná Bay, Dominican Republic, whereas SEK was compiled from the scientific literature. Size at maturity estimates derived from LEK and SEK overlapped for most of the 15 commonly harvested species (10 of 15). In contrast, fishers' maximum size estimates were usually lower than (eight species), or overlapped with (five species) scientific estimates. Fishers' size-based estimates of catch composition indicate greater potential for overfishing than estimates based on SEK. Fishers' estimates of size at capture relative to size at maturity suggest routine inclusion of juveniles in the catch (9 of 15 species), and fishers' estimates suggest that harvested fish are substantially smaller than maximum body size for most species (11 of 15 species). Scientific estimates also suggest that harvested fish are generally smaller than maximum body size (13 of 15), but suggest that the catch is dominated by adults for most species (9 of 15 species), and that juveniles are present in the catch for fewer species (6 of 15). Most Samaná fishers characterized the current state of their fishery as poor (73%) and as having changed for the worse over the past 20 yr (60%). Fishers stated that concern about overfishing, catching small fish, and catching immature fish contributed to these perceptions, indicating a possible influence of catch-size composition on their perceptions. Future work should test this link more explicitly because we found no evidence that the minority of fishers with more positive perceptions of their fishery reported systematically different estimates of catch-size composition than those with the more

  10. Dependence of US hurricane economic loss on maximum wind speed and storm size

    International Nuclear Information System (INIS)

    Zhai, Alice R; Jiang, Jonathan H

    2014-01-01

    Many empirical hurricane economic loss models consider only wind speed and neglect storm size. These models may be inadequate in accurately predicting the losses of super-sized storms, such as Hurricane Sandy in 2012. In this study, we examined the dependences of normalized US hurricane loss on both wind speed and storm size for 73 tropical cyclones that made landfall in the US from 1988 through 2012. A multi-variate least squares regression is used to construct a hurricane loss model using both wind speed and size as predictors. Using maximum wind speed and size together captures more variance of losses than using wind speed or size alone. It is found that normalized hurricane loss (L) approximately follows a power law relation with maximum wind speed (V_max) and size (R), L = 10^c V_max^a R^b, with c determining an overall scaling factor and the exponents a and b generally ranging between 4–12 and 2–4, respectively. Both a and b tend to increase with stronger wind speed. Hurricane Sandy's size was about three times the average size of all hurricanes analyzed. Based on the bi-variate regression model that explains the most variance for hurricanes, Hurricane Sandy's loss would be approximately 20 times smaller if its size were average, with maximum wind speed unchanged. It is important to revise conventional empirical hurricane loss models that depend only on maximum wind speed to include both maximum wind speed and size as predictors.
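
    The fitted relation is log-linear, so the bi-variate model can be recovered with ordinary least squares on log10-transformed data. A minimal sketch on synthetic data (the coefficients and units below are placeholders, not the study's estimates):

        # Fit log10(L) = c + a*log10(Vmax) + b*log10(R) by least squares on
        # synthetic data (coefficients and units are placeholders).
        import numpy as np

        rng = np.random.default_rng(1)
        n = 73
        vmax = rng.uniform(33, 80, n)         # landfall max wind speed (assumed m/s)
        r = rng.uniform(50, 500, n)           # storm size metric (assumed km)
        c, a, b = -6.0, 6.0, 3.0              # illustrative coefficients
        logL = c + a * np.log10(vmax) + b * np.log10(r) + rng.normal(0, 0.4, n)

        X = np.column_stack([np.ones(n), np.log10(vmax), np.log10(r)])
        coef, *_ = np.linalg.lstsq(X, logL, rcond=None)
        print("c, a, b =", np.round(coef, 2))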

  11. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.

    Science.gov (United States)

    Kim, Sehwi; Jung, Inkyung

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
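
    As a heavily simplified sketch of the idea (the data layout and the Lorenz-curve construction below are my assumptions, not the authors' implementation): for each candidate maximum reported cluster size, compute a Gini coefficient from the observed and expected case counts of the reported clusters, and keep the candidate that maximizes it.

        # Choose the maximum reported cluster size by maximizing a Gini
        # coefficient over the reported clusters (illustrative sketch).
        import numpy as np

        def gini(observed, expected):
            """Gini coefficient of the Lorenz curve of observed vs. expected case shares."""
            obs = np.asarray(observed, dtype=float)
            exp_ = np.asarray(expected, dtype=float)
            order = np.argsort(obs / exp_)               # ascending relative risk
            x = np.concatenate([[0.0], np.cumsum(exp_[order]) / exp_.sum()])
            y = np.concatenate([[0.0], np.cumsum(obs[order]) / obs.sum()])
            return 1.0 - 2.0 * np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0)

        # Hypothetical (observed, expected) case counts of the clusters reported
        # at each candidate maximum reported cluster size (% of population at risk).
        reported = {10: ([40, 22], [20, 15]), 25: ([70, 40], [45, 30]), 50: ([150], [120])}
        best = max(reported, key=lambda s: gini(*reported[s]))
        print("chosen maximum reported cluster size:", best, "%")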

  12. The maximum sizes of large scale structures in alternative theories of gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Sourav [IUCAA, Pune University Campus, Post Bag 4, Ganeshkhind, Pune, 411 007 India (India); Dialektopoulos, Konstantinos F. [Dipartimento di Fisica, Università di Napoli 'Federico II', Complesso Universitario di Monte S. Angelo, Edificio G, Via Cinthia, Napoli, I-80126 Italy (Italy); Romano, Antonio Enea [Instituto de Física, Universidad de Antioquia, Calle 70 No. 52–21, Medellín (Colombia); Skordis, Constantinos [Department of Physics, University of Cyprus, 1 Panepistimiou Street, Nicosia, 2109 Cyprus (Cyprus); Tomaras, Theodore N., E-mail: sbhatta@iitrpr.ac.in, E-mail: kdialekt@gmail.com, E-mail: aer@phys.ntu.edu.tw, E-mail: skordis@ucy.ac.cy, E-mail: tomaras@physics.uoc.gr [Institute of Theoretical and Computational Physics and Department of Physics, University of Crete, 70013 Heraklion (Greece)

    2017-07-01

    The maximum size of a cosmic structure is given by the maximum turnaround radius—the scale where the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulae for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulae agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the ΛCDM value, by a factor 1 + 1/(3ω), where ω ≫ 1 is the Brans-Dicke parameter, implying consistency of the theory with current data.

  13. Handelman's hierarchy for the maximum stable set problem

    NARCIS (Netherlands)

    Laurent, M.; Sun, Z.

    2014-01-01

    The maximum stable set problem is a well-known NP-hard problem in combinatorial optimization, which can be formulated as the maximization of a quadratic square-free polynomial over the (Boolean) hypercube. We investigate a hierarchy of linear programming relaxations for this problem, based on a

  14. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  15. A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation

    International Nuclear Information System (INIS)

    Li, Haisen S.; Romeijn, H. Edwin; Dempsey, James F.

    2006-01-01

    We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the phase of dose sampling and superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution could be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by the discrete dose sampling under a 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a near mono-energetic (of width about 1% of the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in water medium. By monodirectional, we mean that the protons travel in the same direction before entering the water medium and that scattering prior to entering the water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity is either an infinitesimal or a finite-sized beamlet. Since a finite-sized beamlet is a superposition of infinitesimal pencil beams, the result for the maximum acceptable grid size obtained with the infinitesimal pencil beam also applies to finite-sized beamlets. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth dependent Gaussian distribution. The model included the spreads of the Bragg peak and the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
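
    A back-of-the-envelope version of the Nyquist argument for the lateral direction: a Gaussian profile with standard deviation sigma has spectrum exp(-2π²σ²ν²); taking the frequency where the spectrum falls to 2% of its peak as the highest significant frequency and sampling at the Nyquist rate bounds the grid size. This is my simplification for illustration, not the paper's full Bragg-curve analysis.

        # Nyquist bound on dose grid size for a Gaussian lateral profile:
        # spectrum exp(-2 pi^2 sigma^2 nu^2); take the 2% point as nu_max.
        import math

        def max_grid_size(sigma_mm: float, level: float = 0.02) -> float:
            nu_max = math.sqrt(-math.log(level)) / (math.pi * sigma_mm * math.sqrt(2.0))
            return 1.0 / (2.0 * nu_max)       # at most half the shortest period

        for sigma in (2.0, 4.0, 8.0):         # hypothetical lateral sigmas, mm
            print(f"sigma = {sigma} mm -> grid <= {max_grid_size(sigma):.2f} mm")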

  16. Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    S. Radhika

    2016-04-01

    Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady state error with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than the conventional MCC based adaptive filters.
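
    A minimal sketch of an MCC adaptive filter with a variable step size, on a synthetic system identification task with impulsive noise. The Gaussian kernel weight exp(-e²/(2σ²)) is the standard MCC ingredient; the simple step-size schedule below is an illustrative stand-in for the paper's MSD-optimal rule.

        # MCC adaptive filter with a simple variable step size (illustrative).
        import numpy as np

        rng = np.random.default_rng(2)
        n_taps, n_samp, sigma = 8, 5000, 1.0
        w_true = rng.normal(size=n_taps)                 # unknown system (assumed)
        x = rng.normal(size=n_samp)
        d = (np.convolve(x, w_true)[:n_samp]
             + rng.normal(0, 0.05, n_samp)
             + (rng.random(n_samp) < 0.01) * rng.normal(0, 10, n_samp))  # impulses

        w = np.zeros(n_taps)
        mu_max, mu = 0.05, 0.05
        for n in range(n_taps - 1, n_samp):
            u = x[n - n_taps + 1:n + 1][::-1]            # regressor, newest first
            e = d[n] - w @ u
            k = np.exp(-e**2 / (2 * sigma**2))           # correntropy kernel weight
            mu = 0.97 * mu + 0.03 * mu_max * k           # illustrative step-size update
            w += mu * k * e * u
        print("weight error:", np.linalg.norm(w - w_true))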

  17. 50 CFR 697.21 - Gear identification and marking, escape vent, maximum trap size, and ghost panel requirements.

    Science.gov (United States)

    2010-10-01

    ... vent, maximum trap size, and ghost panel requirements. 697.21 Section 697.21 Wildlife and Fisheries... identification and marking, escape vent, maximum trap size, and ghost panel requirements. (a) Gear identification... Administrator finds to be consistent with paragraph (c) of this section. (d) Ghost panel. (1) Lobster traps not...

  18. Optimum detection for extracting maximum information from symmetric qubit sets

    International Nuclear Information System (INIS)

    Mizuno, Jun; Fujiwara, Mikio; Sasaki, Masahide; Akiba, Makoto; Kawanishi, Tetsuya; Barnett, Stephen M.

    2002-01-01

    We demonstrate a class of optimum detection strategies for extracting the maximum information from sets of equiprobable real symmetric qubit states of a single photon. These optimum strategies have been predicted by Sasaki et al. [Phys. Rev. A 59, 3325 (1999)]. The peculiar aspect is that detections with at least three outputs suffice for optimum extraction of information regardless of the number of signal elements. The cases of ternary (or trine), quinary, and septenary polarization signals are studied where a standard von Neumann detection (a projection onto a binary orthogonal basis) fails to access the maximum information. Our experiments demonstrate that it is possible with present technologies to attain about 96% of the theoretical limit.

  19. The limit distribution of the maximum increment of a random walk with dependent regularly varying jump sizes

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Moser, Martin

    2013-01-01

    We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable.

  1. An electromagnetism-like method for the maximum set splitting problem

    Directory of Open Access Journals (Sweden)

    Kratica Jozef

    2013-01-01

    Full Text Available In this paper, an electromagnetism-like approach (EM) for solving the maximum set splitting problem (MSSP) is applied. The hybrid approach, consisting of movement based on attraction-repulsion mechanisms combined with the proposed scaling technique, directs EM to promising search regions. A fast implementation of the local search procedure additionally improves the efficiency of the overall EM system. The performance of the proposed EM approach is evaluated on two classes of instances from the literature: minimum hitting set and Steiner triple systems. The results show that, except in one case, EM reaches optimal solutions on minimum hitting set instances with up to 500 elements and 50,000 subsets. It also reaches all optimal/best-known solutions for Steiner triple systems.
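
    A compact sketch of the electromagnetism-like mechanics on a toy max set splitting instance: real-valued points round to 2-colorings of the ground set, the objective counts split subsets, and points move under quality-weighted attraction-repulsion forces. All parameter choices are illustrative, not the paper's tuned configuration, and the local search step is omitted.

        # Toy electromagnetism-like (EM) heuristic for maximum set splitting.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 12
        subsets = [rng.choice(n, size=3, replace=False) for _ in range(20)]

        def split_count(bits):
            # a subset is "split" if it meets both color classes
            return sum(0 < bits[s].sum() < len(s) for s in subsets)

        pop = rng.random((15, n))
        for _ in range(200):
            vals = np.array([split_count(p > 0.5) for p in pop])
            best = pop[vals.argmax()].copy()
            q = np.exp((vals - vals.max()) / n)          # simplified charges
            for i in range(len(pop)):
                force = np.zeros(n)
                for j in range(len(pop)):
                    if i != j:
                        diff = pop[j] - pop[i]
                        sign = 1.0 if vals[j] > vals[i] else -1.0
                        force += sign * q[i] * q[j] * diff / (diff @ diff + 1e-9)
                pop[i] = np.clip(pop[i] + 0.1 * rng.random() * force, 0.0, 1.0)
            pop[vals.argmin()] = best                    # elitism: keep the incumbent
        print("split subsets:", max(split_count(p > 0.5) for p in pop), "of", len(subsets))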

  2. Characterizing graphs of maximum matching width at most 2

    DEFF Research Database (Denmark)

    Jeong, Jisu; Ok, Seongmin; Suh, Geewon

    2017-01-01

    The maximum matching width is a width-parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...

  3. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Abstract Estimation schemes of Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio ... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error
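
    For a single framed-slotted ALOHA read round without detection errors, the ML estimate already has a compact form: with n tags and K slots, a slot contains r tags with probability Binomial(n, 1/K), and the estimate maximizes the multinomial likelihood of the observed empty/singleton/collision counts. A sketch of that baseline (detection errors, which the paper models, are omitted here):

        # ML estimate of tag count from one framed-slotted ALOHA round.
        import math

        def loglik(n, K, n0, n1, nc):
            p0 = (1 - 1 / K) ** n                       # empty slot
            p1 = n / K * (1 - 1 / K) ** (n - 1)         # singleton slot
            pc = max(1e-12, 1 - p0 - p1)                # collision slot
            return (n0 * math.log(max(p0, 1e-12)) + n1 * math.log(max(p1, 1e-12))
                    + nc * math.log(pc))

        def ml_cardinality(K, n0, n1, nc, n_max=2000):
            return max(range(1, n_max), key=lambda n: loglik(n, K, n0, n1, nc))

        # Example: 128 slots observed as 40 empty, 52 singleton, 36 collision.
        print("ML tag estimate:", ml_cardinality(K=128, n0=40, n1=52, nc=36))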

  4. Maximum size-density relationships for mixed-hardwood forest stands in New England

    Science.gov (United States)

    Dale S. Solomon; Lianjun Zhang

    2000-01-01

    Maximum size-density relationships were investigated for two mixed-hardwood ecological types (sugar maple-ash and beech-red maple) in New England. Plots meeting type criteria and undergoing self-thinning were selected for each habitat. Using reduced major axis regression, no differences were found between the two ecological types. Pure species plots (the species basal...

  5. Multivariate modeling of complications with data driven variable selection: Guarding against overfitting and effects of data set size

    International Nuclear Information System (INIS)

    Schaaf, Arjen van der; Xu Chengjian; Luijk, Peter van; Veld, Aart A. van’t; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    Purpose: Multivariate modeling of complications after radiotherapy is frequently used in conjunction with data driven variable selection. This study quantifies the risk of overfitting in a data driven modeling method using bootstrapping for data with typical clinical characteristics, and estimates the minimum amount of data needed to obtain models with relatively high predictive power. Materials and methods: To facilitate repeated modeling and cross-validation with independent datasets for the assessment of true predictive power, a method was developed to generate simulated data with statistical properties similar to real clinical data sets. Characteristics of three clinical data sets from radiotherapy treatment of head and neck cancer patients were used to simulate data with set sizes between 50 and 1000 patients. A logistic regression method using bootstrapping and forward variable selection was used for complication modeling, resulting for each simulated data set in a selected number of variables and an estimated predictive power. The true optimal number of variables and true predictive power were calculated using cross-validation with very large independent data sets. Results: For all simulated data set sizes the number of variables selected by the bootstrapping method was on average close to the true optimal number of variables, but showed considerable spread. Bootstrapping is more accurate in selecting the optimal number of variables than the AIC and BIC alternatives, but this did not translate into a significant difference of the true predictive power. The true predictive power asymptotically converged toward a maximum predictive power for large data sets, and the estimated predictive power converged toward the true predictive power. More than half of the potential predictive power is gained after approximately 200 samples. Our simulations demonstrated severe overfitting (a predictive power lower than that of predicting 50% probability) in a number of small
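
    A generic illustration of the bootstrapped forward-selection loop with scikit-learn (the synthetic data and out-of-bag validation are my choices for the sketch, not the paper's complication-modeling code):

        # Bootstrapped forward selection with logistic regression; out-of-bag
        # cases estimate predictive power per model size (illustrative sketch).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        n, p, max_vars, rounds = 200, 10, 5, 20
        X = rng.normal(size=(n, p))
        logit = 1.2 * X[:, 0] - 0.8 * X[:, 1]       # two truly predictive variables
        y = rng.random(n) < 1 / (1 + np.exp(-logit))

        def forward_select(Xt, yt, Xv, yv):
            chosen, aucs = [], []
            for _ in range(max_vars):
                def train_auc(j):
                    m = LogisticRegression().fit(Xt[:, chosen + [j]], yt)
                    return roc_auc_score(yt, m.predict_proba(Xt[:, chosen + [j]])[:, 1])
                chosen.append(max((j for j in range(p) if j not in chosen), key=train_auc))
                m = LogisticRegression().fit(Xt[:, chosen], yt)
                aucs.append(roc_auc_score(yv, m.predict_proba(Xv[:, chosen])[:, 1]))
            return np.array(aucs)

        scores = np.zeros(max_vars)
        for _ in range(rounds):
            idx = rng.integers(0, n, n)                 # bootstrap resample
            oob = np.setdiff1d(np.arange(n), idx)       # out-of-bag validation cases
            scores += forward_select(X[idx], y[idx], X[oob], y[oob])
        print("mean out-of-bag AUC by model size:", np.round(scores / rounds, 3))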

  6. Determining the effect of grain size and maximum induction upon coercive field of electrical steels

    Science.gov (United States)

    Landgraf, Fernando José Gomes; da Silveira, João Ricardo Filipini; Rodrigues-Jr., Daniel

    2011-10-01

    Although theoretical models have already been proposed, experimental data is still lacking to quantify the influence of grain size upon coercivity of electrical steels. Some authors consider a linear inverse proportionality, while others suggest a square root inverse proportionality. Results also differ with regard to the slope of the reciprocal of grain size-coercive field relation for a given material. This paper discusses two aspects of the problem: the maximum induction used for determining coercive force and the possible effect of lurking variables such as the grain size distribution breadth and crystallographic texture. Electrical steel sheets containing 0.7% Si, 0.3% Al and 24 ppm C were cold-rolled and annealed in order to produce different grain sizes (ranging from 20 to 150 μm). Coercive field was measured along the rolling direction and found to depend linearly on the reciprocal of grain size, with a slope of approximately 0.9 (A/m)mm at 1.0 T induction. A general relation for coercive field as a function of grain size and maximum induction was established, yielding an average absolute error below 4%. Through measurement of B50 and image analysis of micrographs, the effects of crystallographic texture and grain size distribution breadth were qualitatively discussed.
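
    A minimal numeric rendering of the reported relation, with the intercept H0 as a made-up placeholder (the paper's general relation also depends on maximum induction, which is held at 1.0 T here):

        # Linear dependence of coercive field on reciprocal grain size at 1.0 T:
        #   Hc = H0 + k / g, slope k ~ 0.9 (A/m)*mm; H0 is a made-up placeholder.
        def coercive_field(grain_size_mm: float, H0: float = 20.0, k: float = 0.9) -> float:
            """Coercive field in A/m for grain size g in mm."""
            return H0 + k / grain_size_mm

        for g_um in (20, 50, 150):
            print(f"grain size {g_um} um -> Hc ~ {coercive_field(g_um / 1000):.0f} A/m")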

  7. Comparison of Hounsfield units by changing in size of physical area and setting size of region of interest by using the CT phantom made with a 3D printer

    International Nuclear Information System (INIS)

    Seung, Youl Hun

    2015-01-01

    In this study, we observed the change in Hounsfield units (HU) caused by varying the size of the physical area and the set size of the region of interest (ROI), focusing on kVp and mAs. Four-channel multi-detector computed tomography was used to acquire transverse axial scanning images and HU values. A three-dimensional printer of the fused deposition modeling (FDM) type was used to produce the phantom. The phantom was designed as a cylinder containing symmetrically located circular holes of 33 mm, 24 mm, 19 mm, 16 mm and 9 mm. The holes were filled with a mixture of iodine contrast agent and distilled water. Images were acquired at 90 kVp, 120 kVp and 140 kVp and at 50 mAs, 100 mAs and 150 mAs, respectively. ImageJ was used to measure HU in the ROIs of the acquired images. As a result, it was confirmed that kVp affects HU more than mAs. The results also suggest that the smaller the physical area, the lower the HU, even for a material of uniform density, and the smaller the set ROI size, the higher the HU. Therefore, setting the maximum ROI is the best way to keep the HU variation within 5 HU when the size of the physical area and the set size of the region of interest change.

  8. Reduced oxygen at high altitude limits maximum size.

    Science.gov (United States)

    Peck, L S; Chapelle, G

    2003-11-07

    The trend towards large size in marine animals with latitude, and the existence of giant marine species in polar regions have long been recognized, but remained enigmatic until a recent study showed it to be an effect of increased oxygen availability in seawater at low temperature. The effect was apparent in data from 12 sites worldwide because of variations in water oxygen content controlled by differences in temperature and salinity. Another major physical factor affecting oxygen content in aquatic environments is reduced pressure at high altitude. Suitable data from high-altitude sites are very scarce. However, an exceptionally rich crustacean collection, which remains largely undescribed, was obtained by the British 1937 expedition from Lake Titicaca on the border between Peru and Bolivia in the Andes at an altitude of 3809 m. We show that in Lake Titicaca the maximum length of amphipods is 2-4 times smaller than at other low-salinity sites (the Caspian Sea and Lake Baikal).

  9. Preliminary study on the maximum handling size, prey size and species selectivity of growth hormone transgenic and non-transgenic common carp Cyprinus carpio when foraging on gastropods

    Science.gov (United States)

    Zhu, Tingbing; Zhang, Lihong; Zhang, Tanglin; Wang, Yaping; Hu, Wei; Olsen, Rolf Eric; Zhu, Zuoyan

    2017-10-01

    The present study preliminarily examined the differences in maximum handling size, prey size and species selectivity of growth hormone transgenic and non-transgenic common carp Cyprinus carpio when foraging on four gastropod species (Bellamya aeruginosa, Radix auricularia, Parafossarulus sinensis and Alocinma longicornis) under laboratory conditions. In the maximum handling size trial, five fish from each age group (1-year-old and 2-year-old) and each genotype (transgenic and non-transgenic) of common carp were individually allowed to feed on B. aeruginosa over a wide range of shell heights. The results showed that maximum handling size increased linearly with fish length, and there was no significant difference in maximum handling size between the two genotypes. In the size selection trial, three pairs of 2-year-old transgenic and non-transgenic carp were individually allowed to feed on three size groups of B. aeruginosa. The results show that the two genotypes of C. carpio favored the small-sized group over the large-sized group. In the species selection trial, three pairs of 2-year-old transgenic and non-transgenic carp were individually allowed to feed on thick-shelled B. aeruginosa and thin-shelled R. auricularia, and five pairs of 2-year-old transgenic and non-transgenic carp were individually allowed to feed on two gastropod species (P. sinensis and A. longicornis) with similar size and shell strength. The results showed that both genotypes preferred the thin-shelled R. auricularia to the thick-shelled B. aeruginosa, but there was no significant difference in selectivity between the two genotypes when fed P. sinensis and A. longicornis. The present study indicates that transgenic and non-transgenic C. carpio show similar selectivity of predation on the size- and species-limited gastropods. While this information may be useful for assessing the environmental risk of transgenic carp, it does not necessarily demonstrate that transgenic common carp might

  10. The limit distribution of the maximum increment of a random walk with regularly varying jump size distribution

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Rackauskas, Alfredas

    2010-01-01

    In this paper, we deal with the asymptotic distribution of the maximum increment of a random walk with a regularly varying jump size distribution. This problem is motivated by a long-standing problem on change point detection for epidemic alternatives. It turns out that the limit distribution of the maximum increment of the random walk is one of the classical extreme value distributions, the Fréchet distribution. We prove the results in the general framework of point processes and for jump sizes taking values in a separable Banach space.

  11. Evaluating Fast Maximum Likelihood-Based Phylogenetic Programs Using Empirical Phylogenomic Data Sets

    Science.gov (United States)

    Zhou, Xiaofan; Shen, Xing-Xing; Hittinger, Chris Todd

    2018-01-01

    Abstract The sizes of the data matrices assembled to resolve branches of the tree of life have increased dramatically, motivating the development of programs for fast, yet accurate, inference. For example, several different fast programs have been developed in the very popular maximum likelihood framework, including RAxML/ExaML, PhyML, IQ-TREE, and FastTree. Although these programs are widely used, a systematic evaluation and comparison of their performance using empirical genome-scale data matrices has so far been lacking. To address this question, we evaluated these four programs on 19 empirical phylogenomic data sets with hundreds to thousands of genes and up to 200 taxa with respect to likelihood maximization, tree topology, and computational speed. For single-gene tree inference, we found that the more exhaustive and slower strategies (ten searches per alignment) outperformed faster strategies (one tree search per alignment) using RAxML, PhyML, or IQ-TREE. Interestingly, single-gene trees inferred by the three programs yielded comparable coalescent-based species tree estimations. For concatenation-based species tree inference, IQ-TREE consistently achieved the best-observed likelihoods for all data sets, and RAxML/ExaML was a close second. In contrast, PhyML often failed to complete concatenation-based analyses, whereas FastTree was the fastest but generated lower likelihood values and more dissimilar tree topologies in both types of analyses. Finally, data matrix properties, such as the number of taxa and the strength of phylogenetic signal, sometimes substantially influenced the programs’ relative performance. Our results provide real-world gene and species tree phylogenetic inference benchmarks to inform the design and execution of large-scale phylogenomic data analyses. PMID:29177474

  12. Tutte sets in graphs I: Maximal Tutte sets and D-graphs

    NARCIS (Netherlands)

    Bauer, D.; Broersma, Haitze J.; Morgana, A.; Schmeichel, E.

    A well-known formula of Tutte and Berge expresses the size of a maximum matching in a graph $G$ in terms of what is usually called the deficiency of $G$. A subset $X$ of $V(G)$ for which this deficiency is attained is called a Tutte set of $G$. While much is known about maximum matchings, less is

  13. Study on Droplet Size and Velocity Distributions of a Pressure Swirl Atomizer Based on the Maximum Entropy Formalism

    Directory of Open Access Journals (Sweden)

    Kai Yan

    2015-01-01

    Full Text Available A predictive model for droplet size and velocity distributions of a pressure swirl atomizer has been proposed based on the maximum entropy formalism (MEF). The constraint conditions of the MEF model include the conservation laws of mass, momentum, and energy. The effects of liquid swirling strength, Weber number, gas-to-liquid axial velocity ratio and gas-to-liquid density ratio on the droplet size and velocity distributions of a pressure swirl atomizer are investigated. Results show that the model based on the maximum entropy formalism works well to predict droplet size and velocity distributions under different spray conditions. Liquid swirling strength, Weber number, gas-to-liquid axial velocity ratio and gas-to-liquid density ratio have different effects on the droplet size and velocity distributions of a pressure swirl atomizer.
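
    The core of an MEF calculation is a constrained entropy maximization. A stripped-down sketch with normalization plus a single mass-type moment constraint (the paper additionally constrains momentum and energy and couples droplet size with velocity):

        # Constrained entropy maximization for a discrete droplet size
        # distribution: normalization plus one d^3 (mass) moment constraint.
        import numpy as np
        from scipy.optimize import minimize

        d = np.linspace(0.05, 3.0, 60)      # diameter normalized by the mass-mean diameter
        cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
                {"type": "eq", "fun": lambda p: p @ d**3 - 1.0}]  # mass conservation

        def neg_entropy(p):
            q = np.clip(p, 1e-12, None)
            return np.sum(q * np.log(q))

        p0 = np.full(d.size, 1.0 / d.size)
        res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * d.size, constraints=cons)
        p = res.x                           # maximum entropy solution, p_i ~ exp(-lam*d_i^3)
        print("sum p =", round(p.sum(), 3), " mean d^3 =", round(float(p @ d**3), 3))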

  14. Set size and culture influence children's attention to number.

    Science.gov (United States)

    Cantrell, Lisa; Kuwabara, Megumi; Smith, Linda B

    2015-03-01

    Much research evidences a system in adults and young children for approximately representing quantity. Here we provide evidence that the bias to attend to discrete quantity versus other dimensions may be mediated by set size and culture. Preschool-age English-speaking children in the United States and Japanese-speaking children in Japan were tested in a match-to-sample task where number was pitted against cumulative surface area in both large and small numerical set comparisons. Results showed that children from both cultures were biased to attend to the number of items for small sets. Large set responses also showed a general attention to number when ratio difficulty was easy. However, relative to the responses for small sets, attention to number decreased for both groups; moreover, both U.S. and Japanese children showed a significant bias to attend to total amount for difficult numerical ratio distances, although Japanese children shifted attention to total area at relatively smaller set sizes than U.S. children. These results add to our growing understanding of how quantity is represented and how such representation is influenced by context--both cultural and perceptual. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Study of the variation of maximum beam size with quadrupole gradient in the FMIT drift tube linac

    International Nuclear Information System (INIS)

    Boicourt, G.P.; Jameson, R.A.

    1981-01-01

    The sensitivity of maximum beam size to input mismatch is studied as a function of quadrupole gradient in a short, high-current, drift-tube linac (DTL), for two prescriptions: constant phase advance with constant filling factor; and constant strength with constant-length quads. Numerical study using PARMILA shows that the choice of quadrupole strength that minimizes the maximum transverse size of the matched beam through subsequent cells of the linac tends to be most sensitive to input mismatch. However, gradients exist nearby that result in almost-as-small beams over a suitably broad range of mismatch. The study was used to choose the initial gradient for the DTL portion of the Fusion Material Irradiation Test (FMIT) linac. The matching required across quad groups is also discussed.

  16. Understanding the Role of Reservoir Size on Probable Maximum Precipitation

    Science.gov (United States)

    Woldemichael, A. T.; Hossain, F.

    2011-12-01

    This study addresses the question 'Does surface area of an artificial reservoir matter in the estimation of probable maximum precipitation (PMP) for an impounded basin?' The motivation of the study was based on the notion that the stationarity assumption that is implicit in the PMP for dam design can be undermined in the post-dam era due to an enhancement of extreme precipitation patterns by an artificial reservoir. In addition, the study lays the foundation for use of regional atmospheric models as one way to perform life cycle assessment for planned or existing dams to formulate best management practices. The American River Watershed (ARW) with the Folsom dam at the confluence of the American River was selected as the study region and the Dec-Jan 1996-97 storm event was selected for the study period. The numerical atmospheric model used for the study was the Regional Atmospheric Modeling System (RAMS). First, the numerical modeling system, RAMS, was calibrated and validated with selected station and spatially interpolated precipitation data. Best combinations of parameterization schemes in RAMS were accordingly selected. Second, to mimic the standard method of PMP estimation by the moisture maximization technique, relative humidity terms in the model were raised to 100% from the ground up to the 500 mb level. The obtained model-based maximum 72-hr precipitation values were named extreme precipitation (EP) as a distinction from the PMPs obtained by the standard methods. Third, six hypothetical reservoir size scenarios ranging from no-dam (all-dry) to a reservoir submerging half of the basin were established to test the influence of reservoir size variation on EP. For the case of the ARW, our study clearly demonstrated that the assumption of stationarity that is implicit in the traditional estimation of PMP can be rendered invalid to a large part due to the very presence of the artificial reservoir. Cloud tracking procedures performed on the basin also give indication of the

  17. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when sample size and allocation rate to the treatment arms are modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
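
    The worst-case mechanism can be illustrated by Monte Carlo: after the interim look, an adversary picks the second-stage sample size from a grid to maximize the conditional rejection probability under H0, while the final analysis naively pools the stages as if the total sample size were fixed. Stage sizes, grid, and the one-sided alpha below are illustrative, not the paper's setting.

        # Monte Carlo sketch of adversarial sample size reassessment (one-sided test).
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(5)
        alpha, n1 = 0.025, 50                       # illustrative values
        grid = np.array([10, 50, 100, 200, 400])    # allowed second-stage sizes
        z_a = norm.ppf(1 - alpha)
        sims = 100_000
        z1 = rng.normal(size=sims)                  # interim statistic under H0

        def cond_reject_prob(z1, n2):
            # P(reject | Z1=z1) under H0 for pooled Z = (sqrt(n1)Z1 + sqrt(n2)Z2)/sqrt(n1+n2)
            return 1 - norm.cdf((z_a * np.sqrt(n1 + n2) - np.sqrt(n1) * z1) / np.sqrt(n2))

        probs = np.array([cond_reject_prob(z1, m) for m in grid])
        n2 = grid[probs.argmax(axis=0)]             # adversary's stage-2 size per trial
        z2 = rng.normal(size=sims)
        z_final = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)
        print("worst-case type 1 error ~", (z_final > z_a).mean(), "vs nominal", alpha)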

  18. Effects of Group Size on Students' Mathematics Achievement in Small Group Settings

    Science.gov (United States)

    Enu, Justice; Danso, Paul Amoah; Awortwe, Peter K.

    2015-01-01

    An ideal group size is hard to obtain in small group settings; hence there are groups with more members than others. The purpose of the study was to find out whether group size has any effects on students' mathematics achievement in small group settings. Two third year classes of the 2011/2012 academic year were selected from two schools in the…

  19. Information overload or search-amplified risk? Set size and order effects on decisions from experience.

    Science.gov (United States)

    Hills, Thomas T; Noguchi, Takao; Gibbert, Michael

    2013-10-01

    How do changes in choice-set size influence information search and subsequent decisions? Moreover, does information overload influence information processing with larger choice sets? We investigated these questions by letting people freely explore sets of gambles before choosing one of them, with the choice sets either increasing or decreasing in number for each participant (from two to 32 gambles). Set size influenced information search, with participants taking more samples overall, but sampling a smaller proportion of gambles and taking fewer samples per gamble, when set sizes were larger. The order of choice sets also influenced search, with participants sampling from more gambles and taking more samples overall if they started with smaller as opposed to larger choice sets. Inconsistent with information overload, information processing appeared consistent across set sizes and choice order conditions, reliably favoring gambles with higher sample means. Despite the lack of evidence for information overload, changes in information search did lead to systematic changes in choice: People who started with smaller choice sets were more likely to choose gambles with the highest expected values, but only for small set sizes. For large set sizes, the increase in total samples increased the likelihood of encountering rare events at the same time that the reduction in samples per gamble amplified the effect of these rare events when they occurred-what we call search-amplified risk. This led to riskier choices for individuals whose choices most closely followed the sample mean.

  1. Energetic constraints, size gradients, and size limits in benthic marine invertebrates.

    Science.gov (United States)

    Sebens, Kenneth P

    2002-08-01

    Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provides a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.
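
    The optimal-size argument can be made concrete with a standard allometric sketch (the notation and power-law forms are assumed for illustration, not taken from the paper): if energy intake scales as aS^b and costs as cS^d with d > b, the energetic optimum is the size at which the derivative of net gain vanishes:

        % Illustrative allometric form of the energetic optimal size argument.
        E(S) = a S^{b} - c S^{d}, \qquad
        \frac{dE}{dS} = a b S^{b-1} - c d S^{d-1} = 0
        \;\Longrightarrow\;
        S^{*} = \left( \frac{a b}{c d} \right)^{1/(d-b)}

    Comparing S* with the mechanically limited maximum size and with the observed mortality schedule then separates habitats where energetics set the size gradient from those where dislodgment or predation dominates, which is the comparison the abstract describes.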

  2. Intraspecific Variation in Maximum Ingested Food Size and Body Mass in Varecia rubra and Propithecus coquereli

    Directory of Open Access Journals (Sweden)

    Adam Hartstone-Rose

    2011-01-01

    Full Text Available In a recent study, we quantified the scaling of ingested food size (Vb), the maximum size at which an animal consistently ingests food whole, and found that Vb scaled isometrically between species of captive strepsirrhines. The current study examines the relationship between Vb and body size within species, with a focus on the frugivorous Varecia rubra and the folivorous Propithecus coquereli. We found no overlap in Vb between the species (all V. rubra ingested larger pieces of food relative to those eaten by P. coquereli), and least-squares regression of Vb and three different measures of body mass showed no scaling relationship within each species. We believe that this lack of relationship results from the relatively narrow intraspecific body size variation and seemingly patternless individual variation in Vb within species, and take this study as further evidence that general scaling questions are best examined interspecifically rather than intraspecifically.

  3. Setting the renormalization scale in QCD: The principle of maximum conformality

    DEFF Research Database (Denmark)

    Brodsky, S. J.; Di Giustino, L.

    2012-01-01

    A key problem in making precise perturbative QCD predictions is the uncertainty in determining the renormalization scale $\mu$ of the running coupling $\alpha_s(\mu^2)$. The purpose of the running coupling in any gauge theory is to sum all terms involving the beta function; in fact, when the renormalization scale is set properly, all nonconformal $\beta \neq 0$ terms in a perturbative expansion arising from renormalization are summed into the running coupling. The remaining terms in the perturbative series are then identical to those of a conformal theory, i.e., the corresponding theory with $\beta = 0$. The resulting scale-fixed predictions using the principle of maximum conformality (PMC) are independent of the choice of renormalization scheme, a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale setting in the Abelian limit...

  4. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

    Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data; phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
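
    The conflation the abstract refers to can be written compactly (notation mine, not the paper's): at each biallelic site k, the observed genotype value is the sum of the two haplotype alleles,

        g_k = h_k^{(1)} + h_k^{(2)} \in \{0, 1, 2\},

    so heterozygous sites (g_k = 1) leave the assignment of alleles to chromosomes ambiguous, which is why phasing is normally needed before phylogeny reconstruction.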

  5. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

    Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data; phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  6. Investigating sediment size distributions and size-specific Sm-Nd isotopes as paleoceanographic proxy in the North Atlantic Ocean: reconstructing past deep-sea current speeds since Last Glacial Maximum

    OpenAIRE

    Li, Yuting

    2017-01-01

    To explore whether the dispersion of sediments in the North Atlantic can be related to modern and past Atlantic Meridional Overturning Circulation (AMOC) flow speed, particle size distributions (weight%, Sortable Silt mean grain size) and grain-size separated (0–4, 4–10, 10–20, 20–30, 30–40 and 40–63 µm) Sm-Nd isotopes and trace element concentrations are measured on 12 cores along the flow-path of Western Boundary Undercurrent and in the central North Atlantic since the Last glacial Maximum ...

  7. Improving small RNA-seq by using a synthetic spike-in set for size-range quality control together with a set for data normalization.

    Science.gov (United States)

    Locati, Mauro D; Terpstra, Inez; de Leeuw, Wim C; Kuzak, Mateusz; Rauwerda, Han; Ensink, Wim A; van Leeuwen, Selina; Nehrdich, Ulrike; Spaink, Herman P; Jonker, Martijs J; Breit, Timo M; Dekker, Rob J

    2015-08-18

    There is an increasing interest in complementing RNA-seq experiments with small-RNA (sRNA) expression data to obtain a comprehensive view of a transcriptome. Currently, two main experimental challenges concerning sRNA-seq exist: how to check the size distribution of isolated sRNAs, given the sensitive size-selection steps in the protocol; and how to normalize data between samples, given the low complexity of sRNA types. We here present two separate sets of synthetic RNA spike-ins for monitoring size-selection and for performing data normalization in sRNA-seq. The size-range quality control (SRQC) spike-in set, consisting of 11 oligoribonucleotides (10-70 nucleotides), was tested by intentionally altering the size-selection protocol and verified via several comparative experiments. We demonstrate that the SRQC set is useful to reproducibly track down biases in the size-selection in sRNA-seq. The external reference for data-normalization (ERDN) spike-in set, consisting of 19 oligoribonucleotides, was developed for sample-to-sample normalization in differential-expression analysis of sRNA-seq data. Testing and applying the ERDN set showed that it can reproducibly detect differential expression over a dynamic range of 2^18. Hence, biological variation in sRNA composition and content between samples is preserved while technical variation is effectively minimized. Together, both spike-in sets can significantly improve the technical reproducibility of sRNA-seq. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper, a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...

  9. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Science.gov (United States)

    Erbe, Malena; Gredler, Birgit; Seefried, Franz Reinhold; Bapst, Beat; Simianer, Henner

    2013-01-01

    Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.
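
    For orientation, the unmodified deterministic expression of Daetwyler et al. (2010) from which the study starts can be written as

        r \approx \sqrt{ \frac{N h^2}{N h^2 + M_e} },

    where N is the training set size, h^2 the reliability of the phenotypes and M_e the number of independent chromosome segments; the paper's modified form, with the weighting factor w and the captured-variance ceiling, is not reproduced here.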

  10. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Directory of Open Access Journals (Sweden)

    Malena Erbe

    Full Text Available Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.

  11. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
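
    Schematically (notation mine, assuming the Gaussian tissue models and multiplicative bias field b described above), Bayes' rule p(i | I(y)) ∝ p(I(y) | i) p(i) with a Gaussian likelihood gives, after taking negative logarithms, a local objective of the form

        E_x = \sum_i \int_{\Omega_i \cap N(x)} \left[ \frac{ \big( I(y) - b(x)\,\mu_i \big)^2 }{ 2\sigma_i^2 } + \ln \sigma_i - \ln p_i \right] dy,

    which is then integrated over all neighborhood centers x to give the global level set energy.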

  12. Predecessor queries in dynamic integer sets

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    1997-01-01

    We consider the problem of maintaining a set of n integers in the range 0..2^w−1 under the operations of insertion, deletion, predecessor queries, minimum queries and maximum queries on a unit cost RAM with word size w bits. Let f(n) be an arbitrary nondecreasing smooth function satisfying n...
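
    For orientation, the operations being maintained can be illustrated with a naive comparison-based structure (a minimal sketch; it gives O(log n) queries but O(n) updates, whereas the paper's point is that a word-RAM structure over integers in 0..2^w−1 can beat comparison bounds):

        import bisect

        class PredecessorSet:
            """Sorted-array baseline for the dynamic predecessor problem."""

            def __init__(self):
                self._keys = []

            def insert(self, x):
                i = bisect.bisect_left(self._keys, x)
                if i == len(self._keys) or self._keys[i] != x:
                    self._keys.insert(i, x)  # O(n) shift; the bottleneck

            def delete(self, x):
                i = bisect.bisect_left(self._keys, x)
                if i < len(self._keys) and self._keys[i] == x:
                    del self._keys[i]

            def predecessor(self, x):
                """Largest key strictly smaller than x, or None."""
                i = bisect.bisect_left(self._keys, x)
                return self._keys[i - 1] if i > 0 else None

            def minimum(self):
                return self._keys[0] if self._keys else None

            def maximum(self):
                return self._keys[-1] if self._keys else None

        s = PredecessorSet()
        for v in (5, 9, 2):
            s.insert(v)
        assert s.predecessor(9) == 5 and s.maximum() == 9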

  13. Combining the role of convenience and consideration set size in explaining fish consumption in Norway.

    Science.gov (United States)

    Rortveit, Asbjorn Warvik; Olsen, Svein Ottar

    2009-04-01

    The purpose of this study is to explore how convenience orientation, perceived product inconvenience and consideration set size are related to attitudes towards fish and fish consumption. The authors present a structural equation model (SEM) based on the integration of two previous studies. The results of a SEM analysis using Lisrel 8.72 on data from a Norwegian consumer survey (n=1630) suggest that convenience orientation and perceived product inconvenience have a negative effect on both consideration set size and consumption frequency. Attitude towards fish has the greatest impact on consumption frequency. The results also indicate that perceived product inconvenience is a key variable since it has a significant impact on attitude, and on consideration set size and consumption frequency. Further, the analyses confirm earlier findings suggesting that the effect of convenience orientation on consumption is partially mediated through perceived product inconvenience. The study also confirms earlier findings suggesting that the consideration set size affects consumption frequency. Practical implications drawn from this research are that the seafood industry would benefit from developing and positioning products that change beliefs about fish as an inconvenient product. Future research for other food categories should be done to enhance the external validity.

  14. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
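
    The mechanism behind the inflation can be illustrated with a deliberately simple two-arm sketch (hypothetical, and much simpler than the paper's multi-armed setting): the second-stage sample size is chosen after looking at the interim z-statistic, but the final z-test pools both stages as if n had been fixed in advance.

        import numpy as np

        rng = np.random.default_rng(0)

        def naive_type1(n1=50, n_max=400, reps=1_000_000, z_alpha=1.96):
            """Monte Carlo type 1 error of a naive pooled z-test when the
            stage-2 sample size depends on the interim result. Under H0 the
            stage-wise z-statistics are independent standard normals."""
            z1 = rng.standard_normal(reps)
            # Data-dependent rule: enlarge stage 2 when the interim result
            # is promising but not yet significant.
            n2 = np.where((z1 > 0.0) & (z1 < 1.0), n_max, n1)
            z2 = rng.standard_normal(reps)
            # Naive pooling weights the stages by sqrt(sample size), as if
            # the total n had been prefixed.
            z = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)
            return (z > z_alpha).mean()

        print(naive_type1())  # roughly 0.030 instead of the nominal 0.025

    Fixing n2 in advance restores 0.025 exactly, which parallels the paper's observation that constraining the second-stage sample sizes can restore type 1 error control.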

  15. Determinants of Awareness, Consideration, and Choice Set Size in University Choice.

    Science.gov (United States)

    Dawes, Philip L.; Brown, Jennifer

    2002-01-01

    Developed and tested a model of students' university "brand" choice using five individual-level variables (ethnic group, age, gender, number of parents going to university, and academic ability) and one situational variable (duration of search) to explain variation in the sizes of awareness, consideration, and choice decision sets. (EV)

  16. Determination of size-specific exposure settings in dental cone-beam CT

    International Nuclear Information System (INIS)

    Pauwels, Ruben; Jacobs, Reinhilde; Bogaerts, Ria; Bosmans, Hilde; Panmekiate, Soontra

    2017-01-01

    To estimate the possible reduction of tube output as a function of head size in dental cone-beam computed tomography (CBCT). A 16 cm PMMA phantom, containing a central and six peripheral columns filled with PMMA, was used to represent an average adult male head. The phantom was scanned using CBCT, with 0-6 peripheral columns having been removed in order to simulate varying head sizes. For five kV settings (70-90 kV), the mAs required to reach a predetermined image noise level was determined, and corresponding radiation doses were derived. Results were expressed as a function of head size, age, and gender, based on growth reference charts. The use of 90 kV consistently resulted in the largest relative dose reduction. A potential mAs reduction ranging from 7 % to 50 % was seen for the different simulated head sizes, showing an exponential relation between head size and mAs. An optimized exposure protocol based on head circumference or age/gender is proposed. A considerable dose reduction, through reduction of the mAs rather than the kV, is possible for small-sized patients in CBCT, including children and females. Size-specific exposure protocols should be clinically implemented. (orig.)

  17. Determination of size-specific exposure settings in dental cone-beam CT

    Energy Technology Data Exchange (ETDEWEB)

    Pauwels, Ruben [Chulalongkorn University, Department of Radiology, Faculty of Dentistry, Patumwan, Bangkok (Thailand); University of Leuven, OMFS-IMPATH Research Group, Department of Imaging and Pathology, Biomedical Sciences Group, Leuven (Belgium); Jacobs, Reinhilde [University of Leuven, OMFS-IMPATH Research Group, Department of Imaging and Pathology, Biomedical Sciences Group, Leuven (Belgium); Bogaerts, Ria [University of Leuven, Laboratory of Experimental Radiotherapy, Department of Oncology, Biomedical Sciences Group, Leuven (Belgium); Bosmans, Hilde [University of Leuven, Medical Physics and Quality Assessment, Department of Imaging and Pathology, Biomedical Sciences Group, Leuven (Belgium); Panmekiate, Soontra [Chulalongkorn University, Department of Radiology, Faculty of Dentistry, Patumwan, Bangkok (Thailand)

    2017-01-15

    To estimate the possible reduction of tube output as a function of head size in dental cone-beam computed tomography (CBCT). A 16 cm PMMA phantom, containing a central and six peripheral columns filled with PMMA, was used to represent an average adult male head. The phantom was scanned using CBCT, with 0-6 peripheral columns having been removed in order to simulate varying head sizes. For five kV settings (70-90 kV), the mAs required to reach a predetermined image noise level was determined, and corresponding radiation doses were derived. Results were expressed as a function of head size, age, and gender, based on growth reference charts. The use of 90 kV consistently resulted in the largest relative dose reduction. A potential mAs reduction ranging from 7 % to 50 % was seen for the different simulated head sizes, showing an exponential relation between head size and mAs. An optimized exposure protocol based on head circumference or age/gender is proposed. A considerable dose reduction, through reduction of the mAs rather than the kV, is possible for small-sized patients in CBCT, including children and females. Size-specific exposure protocols should be clinically implemented. (orig.)

  18. Auditory proactive interference in monkeys: the roles of stimulus set size and intertrial interval.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2013-09-01

    We conducted two experiments to examine the influences of stimulus set size (the number of stimuli that are used throughout the session) and intertrial interval (ITI, the elapsed time between trials) in auditory short-term memory in monkeys. We used an auditory delayed matching-to-sample task wherein the animals had to indicate whether two sounds separated by a 5-s retention interval were the same (match trials) or different (nonmatch trials). In Experiment 1, we randomly assigned stimulus set sizes of 2, 4, 8, 16, 32, 64, or 192 (trial-unique) for each session of 128 trials. Consistent with previous visual studies, overall accuracy was consistently lower when smaller stimulus set sizes were used. Further analyses revealed that these effects were primarily caused by an increase in incorrect "same" responses on nonmatch trials. In Experiment 2, we held the stimulus set size constant at four for each session and alternately set the ITI at 5, 10, or 20 s. Overall accuracy improved when the ITI was increased from 5 to 10 s, but it was the same across the 10- and 20-s conditions. As in Experiment 1, the overall decrease in accuracy during the 5-s condition was caused by a greater number of false "match" responses on nonmatch trials. Taken together, Experiments 1 and 2 showed that auditory short-term memory in monkeys is highly susceptible to proactive interference caused by stimulus repetition. Additional analyses of the data from Experiment 1 suggested that monkeys may make same-different judgments on the basis of a familiarity criterion that is adjusted by error-related feedback.

  19. The reconstruction of choice value in the brain: a look into the size of consideration sets and their affective consequences.

    Science.gov (United States)

    Kim, Hye-Young; Shin, Yeonsoon; Han, Sanghoon

    2014-04-01

    It has been proposed that choice utility exhibits an inverted U-shape as a function of the number of options in the choice set. However, most researchers have so far only focused on the "physically extant" number of options in the set while disregarding the more important psychological factor, the "subjective" number of options worth considering to choose, that is, the size of the consideration set. To explore this previously ignored aspect, we examined how variations in the size of a consideration set can produce different affective consequences after making choices and investigated the underlying neural mechanism using fMRI. After rating their preferences for art posters, participants made a choice from a presented set and then reported on their level of satisfaction with their choice and the level of difficulty experienced in choosing it. Our behavioral results demonstrated that an enlarged assortment set can lead to greater choice satisfaction only when increases in both consideration set size and preference contrast are involved. Moreover, choice difficulty is determined based on the size of an individual's consideration set rather than on the size of the assortment set, and it decreases linearly as a function of the level of contrast among alternatives. The neuroimaging analysis of choice-making revealed that subjective consideration set size was encoded in the striatum, the dACC, and the insula. In addition, the striatum also represented variations in choice satisfaction resulting from alterations in the size of consideration sets, whereas a common neural specificity for choice difficulty and consideration set size was shown in the dACC. These results have theoretical and practical importance in that this is one of the first studies investigating the influence of the psychological attributes of choice sets on the value-based decision-making process.

  20. Gravitational wave chirp search: no-signal cumulative distribution of the maximum likelihood detection statistic

    International Nuclear Information System (INIS)

    Croce, R P; Demma, Th; Longo, M; Marano, S; Matta, V; Pierro, V; Pinto, I M

    2003-01-01

    The cumulative distribution of the supremum of a set (bank) of correlators is investigated in the context of maximum likelihood detection of gravitational wave chirps from coalescing binaries with unknown parameters. Accurate (lower-bound) approximants are introduced based on a suitable generalization of previous results by Mohanty. Asymptotic properties (in the limit where the number of correlators goes to infinity) are highlighted. The validity of numerical simulations made on small-size banks is extended to banks of any size, via a Gaussian correlation inequality
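
    Schematically (notation mine): if the bank consisted of N statistically independent correlators with common no-signal marginal CDF F, the distribution of the supremum would factorize exactly, and for positively correlated Gaussian banks a correlation inequality of the kind invoked here keeps the product as a lower bound:

        P\left( \max_{1 \le k \le N} \Lambda_k \le \lambda \right) \;\ge\; \prod_{k=1}^{N} P(\Lambda_k \le \lambda) \;=\; F(\lambda)^N .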

  1. Determining an Estimate of an Equivalence Relation for Moderate and Large Sized Sets

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2017-01-01

    Full Text Available This paper presents two approaches to determining estimates of an equivalence relation on the basis of pairwise comparisons with random errors. Obtaining such an estimate requires the solution of a discrete programming problem which minimizes the sum of the differences between the form of the relation and the comparisons. The problem is NP-hard and can be solved with the use of exact algorithms for sets of moderate size, i.e. about 50 elements. In the case of larger sets, i.e. at least 200 comparisons for each element, it is necessary to apply heuristic algorithms. The paper presents results (a statistical preprocessing) which enable us to determine the optimal or a near-optimal solution with acceptable computational cost. They include: the development of a statistical procedure producing comparisons with low probabilities of errors and a heuristic algorithm based on such comparisons. The proposed approach guarantees the applicability of such estimators for any size of set. (original abstract)
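
    In schematic form (notation mine), the discrete programming problem mentioned is: given pairwise comparisons c_{ij} \in \{0, 1\} with random errors (1 meaning "judged equivalent"), find the equivalence relation that disagrees with the fewest comparisons,

        \min_{r \in \mathcal{E}_n} \; \sum_{i<j} \lvert r_{ij} - c_{ij} \rvert ,

    where \mathcal{E}_n is the set of 0-1 relations on n elements that are reflexive, symmetric and transitive, i.e. the partitions of the set.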

  2. Performance analysis and comparison of an Atkinson cycle coupled to variable temperature heat reservoirs under maximum power and maximum power density conditions

    International Nuclear Information System (INIS)

    Wang, P.-Y.; Hou, S.-S.

    2005-01-01

    In this paper, performance analysis and comparison based on the maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant volume heat addition and constant pressure heat rejection. This study is based purely on classical thermodynamic analysis methodology. It should be especially emphasized that all the results and conclusions are based on classical thermodynamics. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it considers the effects of engine size as related to investment cost. The results show that an engine design based on maximum power density with constant effectiveness of the hot and cold side heat exchangers or constant inlet temperature ratio of the heat reservoirs will have smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the viewpoints of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power conditions. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density conditions requires tougher materials for engine construction than one based on maximum power conditions.
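
    In symbols, the optimization objective defined in the abstract is

        P_d = \frac{P}{v_{\max}} ,

    where P is the cycle power output and v_max is the maximum specific volume reached during the cycle; maximizing P_d rather than P alone penalizes designs that achieve their power with a physically larger working volume.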

  3. The extended Price equation quantifies species selection on mammalian body size across the Palaeocene/Eocene Thermal Maximum.

    Science.gov (United States)

    Rankin, Brian D; Fox, Jeremy W; Barrón-Ortiz, Christian R; Chew, Amy E; Holroyd, Patricia A; Ludtke, Joshua A; Yang, Xingkai; Theodor, Jessica M

    2015-08-07

    Species selection, covariation of species' traits with their net diversification rates, is an important component of macroevolution. Most studies have relied on indirect evidence for its operation and have not quantified its strength relative to other macroevolutionary forces. We use an extension of the Price equation to quantify the mechanisms of body size macroevolution in mammals from the latest Palaeocene and earliest Eocene of the Bighorn and Clarks Fork Basins of Wyoming. Dwarfing of mammalian taxa across the Palaeocene/Eocene Thermal Maximum (PETM), an intense, brief warming event that occurred at approximately 56 Ma, has been suggested to reflect anagenetic change and the immigration of small-bodied mammals, but might also be attributable to species selection. Using previously reconstructed ancestor-descendant relationships, we partitioned change in mean mammalian body size into three distinct mechanisms: species selection operating on resident mammals, anagenetic change within resident mammalian lineages and change due to immigrants. The remarkable decrease in mean body size across the warming event occurred through anagenetic change and immigration. Species selection also was strong across the PETM but, intriguingly, favoured larger-bodied species, implying some unknown mechanism(s) by which warming events affect macroevolution. © 2015 The Author(s).
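
    Schematically (notation mine; the extension used in the paper is the addition of an immigration term to the standard Price partition), the change in mean body size across an interval decomposes as

        \Delta\bar{z} \;=\; \frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}} \;+\; \frac{\operatorname{E}[\, w_i \,\Delta z_i \,]}{\bar{w}} \;+\; \Delta\bar{z}_{\mathrm{imm}} ,

    where z_i is the body size of resident lineage i and w_i its net diversification across the interval; the first term is species selection, the second anagenetic change, and the third the contribution of immigrants.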

  4. Explicit Constructions and Bounds for Batch Codes with Restricted Size of Reconstruction Sets

    OpenAIRE

    Thomas, Eldho K.; Skachek, Vitaly

    2017-01-01

    Linear batch codes and codes for private information retrieval (PIR) with a query size $t$ and a restricted size $r$ of the reconstruction sets are studied. New bounds on the parameters of such codes are derived for small values of $t$ or of $r$ by providing corresponding constructions. By building on the ideas of Cadambe and Mazumdar, a new bound in a recursive form is derived for batch codes and PIR codes.

  5. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix pertaining to water quality variables (p = 22) of a small data set comprising 55 samples (stations) from which water samples were collected. Because in ecology and environmental sciences the data sets are invariably small owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis using Ward’s method that does not require any stringent distributional assumptions.
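
    A minimal sketch of the resampling scheme described (with random stand-in data; the study's real matrix is 55 stations by 22 water-quality variables):

        import numpy as np

        rng = np.random.default_rng(42)

        def bootstrap_pca_eigvals(data, n_boot=100, sample_size=30, k=10):
            """Bootstrap the leading k eigenvalues of a correlation-matrix
            PCA on row-resampled data (n_samples x p_variables)."""
            out = []
            for _ in range(n_boot):
                rows = rng.integers(0, data.shape[0], size=sample_size)
                corr = np.corrcoef(data[rows], rowvar=False)
                out.append(np.linalg.eigvalsh(corr)[::-1][:k])  # descending
            return np.asarray(out)  # (n_boot, k)

        X = rng.normal(size=(55, 22))  # stand-in for the 55x22 station matrix
        print(bootstrap_pca_eigvals(X).mean(axis=0))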

  6. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  7. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ... adaptive artificial neural network: Proposition for a new sizing procedure.

  8. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    Directory of Open Access Journals (Sweden)

    Ning-Cong Xiao

    2013-12-01

    Full Text Available In this paper, a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of the uncertain parameters more accurately, since it does not need any additional information or assumptions. Finally, two optimization models are presented which can be used to determine the lower and upper bounds of the system's probability of failure under vague environmental conditions. Two numerical examples are investigated to demonstrate the proposed method.
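
    The maximum entropy density referred to has the standard exponential-family form: maximizing the entropy H(p) = -\int p \ln p \, dx subject to moment constraints \int g_i(x)\, p(x)\, dx = m_i gives

        p^{*}(x) = \exp\!\Big( \lambda_0 + \sum_i \lambda_i\, g_i(x) \Big) ,

    with the multipliers \lambda_i fixed by the constraints, so no distributional shape beyond the known moments is assumed.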

  9. Word length, set size, and lexical factors: Re-examining what causes the word length effect.

    Science.gov (United States)

    Guitard, Dominic; Gabel, Andrew J; Saint-Aubin, Jean; Surprenant, Aimée M; Neath, Ian

    2018-04-19

    The word length effect, better recall of lists of short (fewer syllables) than long (more syllables) words, has been termed a benchmark effect of working memory. Despite this, experiments on the word length effect can yield quite different results depending on set size and stimulus properties. Seven experiments are reported that address these 2 issues. Experiment 1 replicated the finding of a preserved word length effect under concurrent articulation for large stimulus sets, which contrasts with the abolition of the word length effect by concurrent articulation for small stimulus sets. Experiment 2, however, demonstrated that when the short and long words are equated on more dimensions, concurrent articulation abolishes the word length effect for large stimulus sets. Experiment 3 shows a standard word length effect when output time is equated, but Experiments 4-6 show no word length effect when short and long words are equated on increasingly more dimensions that previous demonstrations have overlooked. Finally, Experiment 7 compared recall of small- and large-neighborhood words that were equated on all the dimensions used in Experiment 6 (except for those directly related to neighborhood size) and a neighborhood size effect was still observed. We conclude that lexical factors, rather than word length per se, are better predictors of when the word length effect will occur. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. Measuring fire size in tunnels

    International Nuclear Information System (INIS)

    Guo, Xiaoping; Zhang, Qihui

    2013-01-01

    A new measure of fire size Q′ has been introduced in longitudinally ventilated tunnels as the ratio of flame height to the height of the tunnel. The analysis in this article has shown that Q′ controls both the critical velocity and the maximum ceiling temperature in the tunnel. Before the fire flame reaches the tunnel ceiling (Q′ < 1.0), the critical Froude number Fr increases with fire size; once the flame has reached the ceiling (Q′ > 1.0), Fr approaches a constant value. This is also a well-known phenomenon in large tunnel fires. Tunnel ceiling temperature shows the opposite trend. Before the fire flame reaches the ceiling, it increases very slowly with the fire size. Once the flame has hit the ceiling of the tunnel, temperature rises rapidly with Q′. The good agreement between the current prediction and three different sets of experimental data has demonstrated that the theory has correctly modelled the relation among the heat release rate of fire, ventilation flow and the height of the tunnel. From a design point of view, the theoretical maximum of critical velocity for a given tunnel can help to prevent oversized ventilation systems. -- Highlights: • Fire sizing is an important safety measure in tunnel design. • New measure of fire size is a function of HRR of fire, tunnel height and ventilation. • The measure can identify large and small fires. • The characteristics of different fires are consistent with observations in real fires
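
    In symbols, the measure defined in the abstract is Q′ = L_f / H, the ratio of flame height L_f to tunnel height H, so Q′ = 1 marks the flame reaching the ceiling. The Froude number mentioned conventionally compares ventilation inertia to buoyancy, Fr = u^2 / (g H); the exact form used in the paper is not reproduced here.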

  11. Body size distribution of the dinosaurs.

    Directory of Open Access Journals (Sweden)

    Eoin J O'Gorman

    Full Text Available The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups, and it supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size.

  12. Body size distribution of the dinosaurs.

    Science.gov (United States)

    O'Gorman, Eoin J; Hone, David W E

    2012-01-01

    The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups, and it supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size.

  13. Body Size Distribution of the Dinosaurs

    Science.gov (United States)

    O’Gorman, Eoin J.; Hone, David W. E.

    2012-01-01

    The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups, and it supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size. PMID:23284818

  14. Metabolic expenditures of lunge feeding rorquals across scale: implications for the evolution of filter feeding and the limits to maximum body size.

    Directory of Open Access Journals (Sweden)

    Jean Potvin

    Full Text Available Bulk-filter feeding is an energetically efficient strategy for resource acquisition and assimilation, and facilitates the maintenance of extreme body size as exemplified by baleen whales (Mysticeti) and multiple lineages of bony and cartilaginous fishes. Among mysticetes, rorqual whales (Balaenopteridae) exhibit an intermittent ram filter feeding mode, lunge feeding, which requires the abandonment of body-streamlining in favor of a high-drag, mouth-open configuration aimed at engulfing a very large amount of prey-laden water. Particularly while lunge feeding on krill (the most widespread prey preference among rorquals), the effort required during engulfment involves short bouts of high-intensity muscle activity that demand high metabolic output. We used computational modeling together with morphological and kinematic data on humpback (Megaptera novaeangliae), fin (Balaenoptera physalus), blue (Balaenoptera musculus) and minke (Balaenoptera acutorostrata) whales to estimate engulfment power output in comparison with standard metrics of metabolic rate. The simulations reveal that engulfment metabolism increases across the full body size of the larger rorqual species to nearly 50 times the basal metabolic rate of terrestrial mammals of the same body mass. Moreover, they suggest that the metabolism of the largest body sizes runs with significant oxygen deficits during mouth opening, namely, 20% over maximum VO2 at the size of the largest blue whales, thus requiring significant contributions from anaerobic catabolism during a lunge and significant recovery after a lunge. Our analyses show that engulfment metabolism is also significantly lower for smaller adults, typically one-tenth to one-half of VO2max. These results not only point to a physiological limit on maximum body size in this lineage, but also have major implications for the ontogeny of extant rorquals as well as the evolutionary pathways used by ancestral toothed whales to transition from hunting

  15. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firth's approach under different sample sizes

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation when the binary data contain separation. Separation is the condition where one or several independent variables exactly separate the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurring in binary probit regression models estimated by the MLE method and by Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and by Firth's approach using the RMSE criterion. Both are examined using simulation under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. For larger sample sizes, the probability decreases and is similar between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
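
    For reference, Firth's approach in its standard formulation maximizes a penalized log-likelihood in which a Jeffreys-prior term keeps the estimates finite even under complete separation (schematic; it applies to the probit link used here as well as to the logit):

        \ell^{*}(\beta) \;=\; \ell(\beta) + \tfrac{1}{2} \ln \lvert I(\beta) \rvert ,

    where \ell is the ordinary log-likelihood and I(\beta) is the Fisher information matrix.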

  16. Prey size and availability limits maximum size of rainbow trout in a large tailwater: insights from a drift-foraging bioenergetics model

    Science.gov (United States)

    Dodrill, Michael J.; Yackulic, Charles B.; Kennedy, Theodore A.; Haye, John W

    2016-01-01

    The cold and clear water conditions present below many large dams create ideal conditions for the development of economically important salmonid fisheries. Many of these tailwater fisheries have experienced declines in the abundance and condition of large trout species, yet the causes of these declines remain uncertain. Here, we develop, assess, and apply a drift-foraging bioenergetics model to identify the factors limiting rainbow trout (Oncorhynchus mykiss) growth in a large tailwater. We explored the relative importance of temperature, prey quantity, and prey size by constructing scenarios where these variables, both singly and in combination, were altered. Predicted growth matched empirical mass-at-age estimates, particularly for younger ages, demonstrating that the model accurately describes how current temperature and prey conditions interact to determine rainbow trout growth. Modeling scenarios that artificially inflated prey size and abundance demonstrate that rainbow trout growth is limited by the scarcity of large prey items and overall prey availability. For example, shifting 10% of the prey biomass to the 13 mm (large) length class, without increasing overall prey biomass, increased lifetime maximum mass of rainbow trout by 88%. Additionally, warmer temperatures resulted in lower predicted growth at current and lower levels of prey availability; however, growth was similar across all temperatures at higher levels of prey availability. Climate change will likely alter flow and temperature regimes in large rivers with corresponding changes to invertebrate prey resources used by fish. Broader application of drift-foraging bioenergetics models to build a mechanistic understanding of how changes to habitat conditions and prey resources affect growth of salmonids will benefit management of tailwater fisheries.

  17. 13 CFR 121.412 - What are the size procedures for partial small business set-asides?

    Science.gov (United States)

    2010-01-01

    ... Requirements for Government Procurement § 121.412 What are the size procedures for partial small business set... portion of a procurement, and is not required to qualify as a small business for the unrestricted portion. ...

  18. Oxygen no longer plays a major role in Body Size Evolution

    Science.gov (United States)

    Datta, H.; Sachson, W.; Heim, N. A.; Payne, J.

    2015-12-01

    When observing the long-term relationship between atmospheric oxygen and maximum organism size across the Geozoic (~3.8 Ga - present), it appears that as oxygen increases, organism size grows. However, during the Phanerozoic (541 Ma - present) oxygen levels varied, so we set out to test the hypothesis that oxygen levels drive patterns of marine animal body size evolution. Expected decreases in maximum size due to a lack of oxygen do not occur; instead, body size continues to increase regardless. In the oxygen data, a relatively low atmospheric oxygen percentage can support increasing body size, so our research tries to determine whether lifestyle affects body size in marine organisms. The genera in the data set were organized based on their tiering, motility, and feeding, such as a pelagic, fully motile predator. When organisms fill a certain ecological niche to take advantage of resources, they will have certain life modes, rather than randomly selected traits. For example, even in terrestrial environments, large animals have to constantly feed themselves to support their expensive terrestrial lifestyle, which involves fairly consistent movement and the structural support necessary for that movement. Only organisms with access to high-energy food sources or large amounts of food can support themselves, and that is before they expend energy elsewhere. Organisms that expend energy frugally when active or have slower metabolisms in comparison to body size have a more efficient lifestyle and are generally able to grow larger, while those that have higher energy demands, like predators, are limited to comparatively smaller sizes. Therefore, with respect to the fossil record and modern measurements of animals, the metabolism and lifestyle of an organism dictate its body size in general. With this further clarification on the patterns of evolution, it will be easier to observe and understand the reasons for the ecological traits of organisms today.

  19. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors are proposed for use in maximum-likelihood-sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. Structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends whose structures depend only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.

  20. Details Matter: Noise and Model Structure Set the Relationship between Cell Size and Cell Cycle Timing

    Directory of Open Access Journals (Sweden)

    Felix Barber

    2017-11-01

    Full Text Available Organisms across all domains of life regulate the size of their cells. However, the means by which this is done is poorly understood. We study two abstracted “molecular” models for size regulation: inhibitor dilution and initiator accumulation. We apply the models to two settings: bacteria like Escherichia coli, which grow fully before they set a division plane and divide into two equally sized cells, and cells that form a bud early in the cell division cycle, confine new growth to that bud, and divide at the connection between that bud and the mother cell, like the budding yeast Saccharomyces cerevisiae. In budding cells, delaying cell division until buds reach the same size as their mother leads to very weak size control, with average cell size and standard deviation of cell size increasing over time and saturating up to 100-fold higher than those values for cells that divide when the bud is still substantially smaller than its mother. In budding yeast, both inhibitor dilution and initiator accumulation models are consistent with the observation that the daughters of diploid cells add a constant volume before they divide. This “adder” behavior has also been observed in bacteria. We find that in bacteria an inhibitor dilution model produces adder correlations that are not robust to noise in the timing of DNA replication initiation or in the timing from initiation of DNA replication to cell division (the C + D period). In contrast, in bacteria an initiator accumulation model yields robust adder correlations in the regime where noise in the timing of DNA replication initiation is much greater than noise in the C + D period, as reported previously (Ho and Amir, 2015). In bacteria, division into two equally sized cells does not broaden the size distribution.
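
    A toy numerical sketch of the adder behaviour discussed (my own minimal parameterization, not the paper's inhibitor-dilution or initiator-accumulation models): cells that add a roughly constant volume between birth and division and then split in half converge to a stable birth-size distribution instead of broadening.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate_adder(n_gen=20, n_cells=10_000, delta=1.0, noise=0.1):
            """Simulate the 'adder' rule for symmetrically dividing cells:
            each cell adds a volume ~delta (with noise) between birth and
            division, then splits in half."""
            v_birth = rng.uniform(0.5, 1.5, n_cells)  # arbitrary initial sizes
            for _ in range(n_gen):
                v_div = v_birth + delta + noise * rng.normal(size=n_cells)
                v_birth = v_div / 2.0  # symmetric division
            return v_birth

        v = simulate_adder()
        print(v.mean(), v.std())  # mean -> delta, std stays bounded

    The fixed point is v = delta, and the stationary birth-size variance is noise^2 / 3 rather than growing over generations, which is the sense in which the adder rule provides size control.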

  1. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    Science.gov (United States)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show that the steepest descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and, by sampling from the parameters' posterior, avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
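
    For context, a minimal version of the steepest-ascent baseline the paper improves on, written for a toy pairwise Ising model small enough to enumerate exactly (illustrative names throughout; the paper's setting replaces the exact moments with Gibbs-sampling estimates and rectifies the parameter space):

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(0)

        def fit_ising(data, lr=0.1, steps=2000):
            """Steepest ascent on the log-likelihood of a pairwise Ising
            model, with exact enumeration of states (feasible for n <~ 15).
            Gradient = empirical moments - model moments."""
            n = data.shape[1]
            states = np.array(list(product([-1, 1], repeat=n)))  # (2^n, n)
            pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
            pair_states = np.array([states[:, i] * states[:, j] for i, j in pairs]).T
            m_emp = data.mean(axis=0)
            c_emp = np.array([(data[:, i] * data[:, j]).mean() for i, j in pairs])
            h, J = np.zeros(n), np.zeros(len(pairs))
            for _ in range(steps):
                energy = states @ h + pair_states @ J
                p = np.exp(energy - energy.max())
                p /= p.sum()
                h += lr * (m_emp - p @ states)       # match magnetizations
                J += lr * (c_emp - p @ pair_states)  # match pair correlations
            return h, J

        spins = rng.choice([-1, 1], size=(500, 5))  # stand-in for binarized spikes
        h, J = fit_ising(spins)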

  2. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    Science.gov (United States)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
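
    For orientation, the pattern-generation subproblem inside a column-generation loop of this kind reduces, for a single stock length, to an unbounded knapsack over the piece lengths weighted by the current LP dual prices; a minimal sketch (illustrative names, not the authors' code):

        def best_pattern(lengths, duals, stock_len):
            """Find the cutting pattern (piece counts) with maximum total
            dual value that fits in one stock object, by dynamic
            programming over the remaining capacity."""
            best = [0.0] * (stock_len + 1)
            choice = [None] * (stock_len + 1)
            for cap in range(1, stock_len + 1):
                for i, (l, d) in enumerate(zip(lengths, duals)):
                    if l <= cap and best[cap - l] + d > best[cap]:
                        best[cap] = best[cap - l] + d
                        choice[cap] = i
            # Recover the pattern from the DP choices.
            pattern, cap = [0] * len(lengths), stock_len
            while cap > 0 and choice[cap] is not None:
                i = choice[cap]
                pattern[i] += 1
                cap -= lengths[i]
            return pattern, best[stock_len]

        print(best_pattern([3, 5, 7], [1.0, 1.8, 2.4], 16))  # -> ([2, 2, 0], 5.6)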

  3. An improved maximum power point tracking method for a photovoltaic system

    Science.gov (United States)

    Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes

    2016-06-01

    In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for photovoltaic (PV) systems is proposed. To achieve simultaneously a fast dynamic response and stable steady-state power, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. An algorithm was then proposed to address the wrong decisions that may be made at an abrupt change of irradiation. The proposed auto-scaling variable step-size approach was compared to various other approaches from the literature, such as the classical fixed step-size, variable step-size and a recent auto-scaling variable step-size maximum power point tracking approach. The simulation results obtained with MATLAB/SIMULINK are given and discussed for validation.
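
    A schematic of the variable step-size idea (a generic perturb-and-observe skeleton whose step scales with |dP/dV|; the paper's specific scaling function and its irradiation-change correction are not reproduced):

        def variable_step_po(v, p, state, m=0.05, d_min=0.05, d_max=0.95):
            """One iteration of a variable step-size perturb & observe MPPT.
            The duty-cycle step scales with |dP/dV|: large far from the MPP
            (fast tracking), small near it (low steady-state oscillation).
            `state` carries (v_prev, p_prev, duty) between control cycles."""
            v_prev, p_prev, duty = state
            dp, dv = p - p_prev, v - v_prev
            step = m * abs(dp / dv) if dv != 0.0 else 0.0
            # Classic P&O logic: keep perturbing in the direction that
            # increased power, otherwise reverse. The sign mapping from
            # voltage perturbation to duty cycle depends on the converter
            # topology and is assumed positive here.
            direction = 1.0 if (dp > 0.0) == (dv > 0.0) else -1.0
            duty = min(max(duty + direction * step, d_min), d_max)
            return (v, p, duty)

        state = (0.0, 0.0, 0.5)  # initial (v_prev, p_prev, duty)
        # Each control cycle: state = variable_step_po(v_meas, v_meas * i_meas, state)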

  4. Identification of "ever-cropped" land (1984-2010) using Landsat annual maximum NDVI image composites: Southwestern Kansas case study.

    Science.gov (United States)

    Maxwell, Susan K; Sylvester, Kenneth M

    2012-06-01

    A time series of 230 intra- and inter-annual Landsat Thematic Mapper images was used to identify land that was ever cropped during the years 1984 through 2010 for a five-county region in southwestern Kansas. Annual maximum Normalized Difference Vegetation Index (NDVI) image composites (NDVI(ann-max)) were used to evaluate the inter-annual dynamics of cropped and non-cropped land. Three feature images were derived from the 27-year NDVI(ann-max) image time series and used in the classification: 1) the maximum NDVI value that occurred over the entire 27-year time span (NDVI(max)), 2) the standard deviation of the annual maximum NDVI values for all years (NDVI(sd)), and 3) the standard deviation of the annual maximum NDVI values for years 1984-1986 (NDVI(sd84-86)) to improve Conservation Reserve Program land discrimination. Results of the classification were compared to three reference data sets: county-level USDA Census records (1982-2007) and two digital land cover maps (the 2005 Kansas map and the USGS Trends Program maps, 1986-2000). The area of ever-cropped land for the five counties was on average 11.8% higher than the area estimated from Census records. Overall agreement between the ever-cropped land map and the 2005 Kansas map was 91.9%, and 97.2% for the Trends maps. Converting the intra-annual Landsat data set to a single annual maximum NDVI image composite considerably reduced the data set size and eliminated cloud and cloud-shadow effects, yet maintained information important for discriminating cropped land. Our results suggest that Landsat annual maximum NDVI image composites will be useful for characterizing land use and land cover change for many applications.
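
    The compositing itself reduces to per-pixel maxima and standard deviations over the image stack; a minimal numpy sketch (array shapes hypothetical, random data standing in for masked Landsat NDVI):

        import numpy as np

        years = np.arange(1984, 2011)
        ndvi = np.random.rand(len(years), 8, 100, 100)  # (year, scene, row, col) stand-in

        ndvi_ann_max = ndvi.max(axis=1)                 # annual maximum composite
        ndvi_max = ndvi_ann_max.max(axis=0)             # feature 1: 27-year maximum
        ndvi_sd = ndvi_ann_max.std(axis=0)              # feature 2: inter-annual std dev
        early = (years >= 1984) & (years <= 1986)
        ndvi_sd_84_86 = ndvi_ann_max[early].std(axis=0) # feature 3: 1984-1986 std dev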

  5. Generating a taxonomy of spatially cued attention for visual discrimination: Effects of judgment precision and set size on attention

    Science.gov (United States)

    Hetley, Richard; Dosher, Barbara Anne; Lu, Zhong-Lin

    2014-01-01

    Attention precues improve the performance of perceptual tasks in many but not all circumstances. These spatial attention effects may depend upon display set size or workload, and have been variously attributed to external noise filtering, stimulus enhancement, contrast gain, or response gain, or to uncertainty or other decision effects. In this study, we document systematically different effects of spatial attention in low- and high-precision judgments, with and without external noise, and in different set sizes in order to contribute to the development of a taxonomy of spatial attention. An elaborated perceptual template model (ePTM) provides an integrated account of a complex set of effects of spatial attention with just two attention factors: a set-size dependent exclusion or filtering of external noise and a narrowing of the perceptual template to focus on the signal stimulus. These results are related to the previous literature by classifying the judgment precision and presence of external noise masks in those experiments, suggesting a taxonomy of spatially cued attention in discrimination accuracy. PMID:24939234

  6. A Full-size High Temperature Superconducting Coil Employed in a Wind Turbine Generator Set-up

    DEFF Research Database (Denmark)

    Song, Xiaowei (Andy); Mijatovic, Nenad; Kellers, Jürgen

    2016-01-01

    A full-size stationary experimental set-up, which is a pole pair segment of a 2 MW high temperature superconducting (HTS) wind turbine generator, has been built and tested under the HTS-GEN project in Denmark. The performance of the HTS coil is crucial to the set-up, and further to the development...... The coil is tested in LN2 first, and then tested in the set-up so that the magnetic environment in a real generator is reflected. The experimental results are reported, followed by a finite element simulation and a discussion on the deviation of the results. The tested and estimated Ic in LN2 are 148 A and 143 A...

  7. Foraging behaviour and prey size spectra of larval herring Clupea harengus

    DEFF Research Database (Denmark)

    Munk, Peter

    1992-01-01

    In order to assess the uniformity of relative prey size spectra of herring larvae and their background in larval foraging behaviour, a set of experimental and field investigations has been carried out. In the experiments, 4 size groups of larval herring Clupea harengus L. were studied when preying on 6 size groups of copepods. Larval swimming and attack behaviour changed with prey size and were related to the ratio between prey length and larval length. The effective search rate showed a maximum when prey length was about......, that the available biomass of food as a proportion of the predator biomass will not increase...... in the biomass spectra of the environment is important to larval growth and survival....

  8. Optimal set of grid size and angular increment for practical dose calculation using the dynamic conformal arc technique: a systematic evaluation of the dosimetric effects in lung stereotactic body radiation therapy

    International Nuclear Information System (INIS)

    Park, Ji-Yeon; Kim, Siyong; Park, Hae-Jin; Lee, Jeong-Woo; Kim, Yeon-Sil; Suh, Tae-Suk

    2014-01-01

    To recommend the optimal plan parameter set of grid size and angular increment for dose calculations in treatment planning for lung stereotactic body radiation therapy (SBRT) using dynamic conformal arc therapy (DCAT), considering both accuracy and computational efficiency. Dose variations with varying grid sizes (2, 3, and 4 mm) and angular increments (2°, 4°, 6°, and 10°) were analyzed in a thorax phantom for 3 spherical target volumes and in 9 patient cases. A 2-mm grid size and 2° angular increment are assumed sufficient to serve as reference values. The dosimetric effect was evaluated using dose–volume histograms, monitor units (MUs), and dose to organs at risk (OARs) for a definite volume corresponding to the dose–volume constraint in lung SBRT. The times required for dose calculations using each parameter set were compared for clinical practicality. Larger grid sizes caused a dose increase to the structures and required higher MUs to achieve the target coverage. The discrete beam arrangements at each angular increment led to over- and underestimated OAR doses due to the undulating dose distribution. When a 2° angular increment was used in both studies, a 4-mm grid size changed the dose variation by up to 3–4% (50 cGy) for the heart and the spinal cord, while a 3-mm grid size produced a dose difference of <1% (12 cGy) in all tested OARs. When a 3-mm grid size was employed, angular increments of 6° and 10° caused maximum dose variations of 3% (23 cGy) and 10% (61 cGy) in the spinal cord, respectively, while a 4° increment resulted in a dose difference of <1% (8 cGy) in all cases except for that of one patient. The 3-mm grid size and 4° angular increment enabled a 78% savings in computation time without making any critical sacrifices to dose accuracy. A parameter set with a 3-mm grid size and a 4° angular increment is found to be appropriate for predicting patient dose distributions with a dose difference below 1% while reducing the computation time.

  9. Maintenance of Velocity and Power With Cluster Sets During High-Volume Back Squats.

    Science.gov (United States)

    Tufano, James J; Conlon, Jenny A; Nimphius, Sophia; Brown, Lee E; Seitz, Laurent B; Williamson, Bryce D; Haff, G Gregory

    2016-10-01

    To compare the effects of a traditional set structure and 2 cluster set structures on force, velocity, and power during back squats in strength-trained men. Twelve men (25.8 ± 5.1 y, 1.74 ± 0.07 m, 79.3 ± 8.2 kg) performed 3 sets of 12 repetitions at 60% of 1-repetition maximum using 3 different set structures: traditional sets (TS), cluster sets of 4 (CS4), and cluster sets of 2 (CS2). When averaged across all repetitions, peak velocity (PV), mean velocity (MV), peak power (PP), and mean power (MP) were greater in CS2 and CS4 than in TS (P < .01), with CS2 also resulting in greater values than CS4 (P < .02). When examining individual sets within each set structure, PV, MV, PP, and MP decreased during the course of TS (effect sizes 0.28-0.99), whereas no decreases were noted during CS2 (effect sizes 0.00-0.13) or CS4 (effect sizes 0.00-0.29). These results demonstrate that CS structures maintain velocity and power, whereas TS structures do not. Furthermore, increasing the frequency of intraset rest intervals in CS structures maximizes this effect and should be used if maximal velocity is to be maintained during training.

  10. Optimization of the size and shape of the set-in nozzle for a PWR reactor pressure vessel

    Energy Technology Data Exchange (ETDEWEB)

    Murtaza, Usman Tariq, E-mail: maniiut@yahoo.com; Javed Hyder, M., E-mail: hyder@pieas.edu.pk

    2015-04-01

    Highlights: • The size and shape of the set-in nozzle of the RPV have been optimized. • The optimized nozzle ensures a mass reduction of around 198 kg per nozzle. • The mass of the RPV should be minimized for better fracture toughness. - Abstract: The objective of this research work is to optimize the size and shape of the set-in nozzle for a typical reactor pressure vessel (RPV) of a 300 MW pressurized water reactor. The analysis was performed by optimizing the four design variables which control the size and shape of the nozzle. These variables are the inner radius of the nozzle, the thickness of the nozzle, the taper angle at the nozzle-cylinder intersection, and the point where the taper of the nozzle starts. It is concluded that the optimum design of the nozzle is the one that minimizes the two conflicting state variables, i.e., the stress intensity (Tresca yield criterion) and the mass of the RPV.

  11. Quark bag coupling to finite size pions

    International Nuclear Information System (INIS)

    De Kam, J.; Pirner, H.J.

    1982-01-01

    A standard approximation in theories of quark bags coupled to a pion field is to treat the pion as an elementary field, ignoring its substructure and finite size. A difficulty associated with these treatments is the lack of stability of the quark bag, due to the rapid increase of the pion pressure on the bag as the bag size diminishes. We investigate the effects of the finite size of the q anti-q pion on the pion-quark bag coupling by means of a simple nonlocal pion-quark interaction. With this amendment the pion pressure on the bag vanishes as the bag size goes to zero. No stability problems are encountered in this description. Furthermore, for extended pions, a maximum is no longer set on the bag parameter B. Therefore 'little bag' solutions may be found provided that B is large enough. We also discuss the possibility of a second minimum in the bag energy function. (orig.)

  12. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  13. Study of the droplet size of sprays generated by swirl nozzles dedicated to gasoline direct injection: measurement and application of the maximum entropy formalism; Etude de la granulometrie des sprays produits par des injecteurs a swirl destines a l'injection directe essence: mesures et application du formalisme d'entropie maximum

    Energy Technology Data Exchange (ETDEWEB)

    Boyaval, S.

    2000-06-15

    This PhD thesis presents a study of a series of high-pressure swirl atomizers dedicated to Gasoline Direct Injection (GDI). Measurements are performed under stationary and pulsed working conditions. A central aspect of this thesis is the development of an original experimental set-up to correct the multiple light scattering that biases drop size distribution measurements obtained with a laser diffraction technique (Malvern 2600D). This technique allows a study of drop size characteristics near the injector tip. Correction factors on the drop size characteristics and on the diffracted intensities are defined from the developed procedure. Another point consists in applying the Maximum Entropy Formalism (MEF) to calculate drop size distributions. Comparisons between experimental distributions corrected with the correction factors and the calculated distributions show good agreement. This work points out that the mean diameter D43, which is also the mean of the volume drop size distribution, and the relative volume span factor Δv are important characteristics of volume drop size distributions. The end of the thesis proposes to determine local drop size characteristics from a new development of a deconvolution technique for line-of-sight scattering measurements. The first results show reliable behaviour of the radial evolution of local characteristics. In the GDI application, we notice that the critical point is the opening stage of the injection. This study clearly shows the effects of injection pressure and nozzle internal geometry on the working characteristics of these injectors, in particular the influence of the pre-spray. This work points out important behaviours that improvements to the GDI principle ought to consider. (author)

  14. Portfolio of automated trading systems: complexity and learning set size issues.

    Science.gov (United States)

    Raudys, Sarunas

    2013-03-01

    In this paper, we consider using profit/loss histories of multiple automated trading systems (ATSs) as N input variables in portfolio management. By means of multivariate statistical analysis and simulation studies, we analyze the influences of sample size (L) and input dimensionality on the accuracy of determining the portfolio weights. We find that degradation in portfolio performance due to inexact estimation of N means and N(N - 1)/2 correlations is proportional to N/L; however, estimation of N variances does not worsen the result. To reduce unhelpful sample size/dimensionality effects, we perform a clustering of N time series and split them into a small number of blocks. Each block is composed of mutually correlated ATSs. It generates an expert trading agent based on a nontrainable 1/N portfolio rule. To increase the diversity of the expert agents, we use training sets of different lengths for clustering. In the output of the portfolio management system, the regularized mean-variance framework-based fusion agent is developed in each walk-forward step of an out-of-sample portfolio validation experiment. Experiments with the real financial data (2003-2012) confirm the effectiveness of the suggested approach.
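
    A compressed sketch of the pipeline described above (clustering correlated systems into blocks, a 1/N rule inside each block, and a regularized mean-variance fusion across the resulting experts), with random data standing in for real profit/loss histories:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(0)
        R = rng.standard_normal((500, 40)) * 0.01       # stand-in P/L histories (L=500, N=40)

        dist = 1.0 - np.corrcoef(R.T)                   # correlation distance between systems
        Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
        blocks = fcluster(Z, t=5, criterion="maxclust")

        # each block becomes an expert agent trading the nontrainable 1/N rule
        experts = np.column_stack([R[:, blocks == b].mean(axis=1) for b in np.unique(blocks)])

        mu, cov = experts.mean(axis=0), np.cov(experts.T)
        w = np.linalg.solve(cov + 0.1 * np.eye(len(mu)), mu)   # ridge-regularized mean-variance
        w = np.maximum(w, 0.0); w /= w.sum()
        print("fusion weights:", np.round(w, 3))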

  15. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.

  16. Set size influences the relationship between ANS acuity and math performance: a result of different strategies?

    Science.gov (United States)

    Dietrich, Julia Felicitas; Nuerk, Hans-Christoph; Klein, Elise; Moeller, Korbinian; Huber, Stefan

    2017-08-29

    Previous research has proposed that the approximate number system (ANS) constitutes a building block for later mathematical abilities. Therefore, numerous studies investigated the relationship between ANS acuity and mathematical performance, but results are inconsistent. Properties of the experimental design have been discussed as a potential explanation of these inconsistencies. In the present study, we investigated the influence of set size and presentation duration on the association between non-symbolic magnitude comparison and math performance. Moreover, we focused on strategies reported as an explanation for these inconsistencies. In particular, we employed a non-symbolic magnitude comparison task and asked participants how they solved the task. We observed that set size was a significant moderator of the relationship between non-symbolic magnitude comparison and math performance, whereas presentation duration of the stimuli did not moderate this relationship. This supports the notion that specific design characteristics contribute to the inconsistent results. Moreover, participants reported different strategies including numerosity-based, visual, counting, calculation-based, and subitizing strategies. Frequencies of these strategies differed between different set sizes and presentation durations. However, we found no specific strategy, which alone predicted arithmetic performance, but when considering the frequency of all reported strategies, arithmetic performance could be predicted. Visual strategies made the largest contribution to this prediction. To conclude, the present findings suggest that different design characteristics contribute to the inconsistent findings regarding the relationship between non-symbolic magnitude comparison and mathematical performance by inducing different strategies and additional processes.

  17. Field size and dose distribution of electron beam

    International Nuclear Information System (INIS)

    Kang, Wee Saing

    1980-01-01

    The author considers some relations between field size and the dose distribution of electron beams. Doses of electron beams are measured either with an ion chamber and electrometer or with dosimetry film. We analyze qualitatively several relations: the energy of the incident electron beam and the depth of maximum dose; field size and depth of maximum dose; field size and scatter factor; electron energy and scatter factor; collimator shape and scatter factor; electron energy and surface dose; field size and surface dose; field size and central-axis depth dose; and field size and practical range. Several results emerge: the field size of the electron beam influences the depth of maximum dose, the scatter factor, the surface dose, and the central-axis depth dose; the scatter factor depends on the field size, the energy of the electron beam, and the shape of the collimator; the depth of maximum dose and the surface dose depend on the energy of the electron beam; but the practical range of the electron beam is independent of field size.

  18. Morphing Wing Weight Predictors and Their Application in a Template-Based Morphing Aircraft Sizing Environment II. Part 2: Morphing Aircraft Sizing via Multi-level Optimization

    Science.gov (United States)

    Skillen, Michael D.; Crossley, William A.

    2008-01-01

    This report presents an approach for sizing a morphing aircraft based upon a multi-level design optimization approach. For this effort, a morphing wing is one whose planform can make significant shape changes in flight - increasing wing area by 50% or more from the lowest possible area, changing sweep by 30° or more, and/or increasing aspect ratio by as much as 200% from the lowest possible value. The top-level optimization problem seeks to minimize the gross weight of the aircraft by determining a set of "baseline" variables - these are common aircraft sizing variables - along with a set of "morphing limit" variables - these describe the maximum shape change for a particular morphing strategy. The sub-level optimization problems represent each segment in the morphing aircraft's design mission; here, each sub-level optimizer minimizes the fuel consumed during its mission segment by changing the wing planform within the bounds set by the baseline and morphing limit variables from the top-level problem.

  19. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, to maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
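
    For reference, the two off-line heuristics named above are the classical First-Fit rule applied to items sorted in decreasing or increasing order; a minimal implementation (unit-capacity bins, illustrative item sizes):

        def first_fit(items, capacity=1.0, decreasing=True):
            """First-Fit-Decreasing (FFD) or First-Fit-Increasing (FFI) packing."""
            bins = []
            for item in sorted(items, reverse=decreasing):
                for b in bins:
                    if sum(b) + item <= capacity:   # place in the first bin that fits
                        b.append(item)
                        break
                else:
                    bins.append([item])             # open a new bin
            return bins

        items = [0.42, 0.17, 0.33, 0.58, 0.25, 0.61, 0.09]
        print("FFD:", first_fit(items))
        print("FFI:", first_fit(items, decreasing=False))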

  1. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
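
    While the maximum entropy test itself is not reproduced here, the underlying model comparison can be sketched with scipy by fitting both candidates to the upper tail and comparing log-likelihoods (synthetic data; the 5% threshold is an illustrative choice):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        data = rng.lognormal(mean=1.0, sigma=1.2, size=20000)
        tail = np.sort(data)[-int(0.05 * len(data)):]      # last few percentiles

        xm = tail.min()
        alpha = len(tail) / np.log(tail / xm).sum()        # closed-form Pareto MLE
        ll_pareto = stats.pareto.logpdf(tail, b=alpha, scale=xm).sum()

        shape, loc, scale = stats.lognorm.fit(tail, floc=0)
        ll_lognorm = stats.lognorm.logpdf(tail, shape, loc, scale).sum()
        print(f"tail log-likelihood: Pareto {ll_pareto:.1f} vs lognormal {ll_lognorm:.1f}")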

  2. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    Science.gov (United States)

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

    The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets whose taxon coverage densities fell below the level needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates with data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.

  3. The influence of spatial grain size on the suitability of the higher-taxon approach in continental priority-setting

    DEFF Research Database (Denmark)

    Larsen, Frank Wugt; Rahbek, Carsten

    2005-01-01

    The higher-taxon approach may provide a pragmatic surrogate for the rapid identification of priority areas for conservation. To date, no continent-wide study has examined the use of higher-taxon data to identify complementarity-based networks of priority areas, nor has the influence of spatial grain size been assessed. We used data obtained from 939 sub-Saharan mammals to analyse the performance of higher-taxon data for continental priority-setting and to assess the influence of spatial grain size in terms of the size of selection units (1° × 1°, 2° × 2° and 4° × 4° latitudinal-longitudinal grid cells). While genus-based priority areas at the 4° grain size perform...... as effectively as species-based priority areas, genus-based areas perform considerably less effectively than species-based areas for the 1° and 2° grain sizes. Thus, our results favour the higher-taxon approach for continental priority-setting only when large grain sizes (≥ 4°) are used.

  4. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well-engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, has a higher reliability, and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for small-rated Remote Area Power Supply systems. The advantages are much greater for larger temperature variations and larger power-rated systems. Other advantages include optimal sizing and system monitoring and control.

  5. Maximum-confidence discrimination among symmetric qudit states

    International Nuclear Information System (INIS)

    Jimenez, O.; Solis-Prosser, M. A.; Delgado, A.; Neves, L.

    2011-01-01

    We study the maximum-confidence (MC) measurement strategy for discriminating among nonorthogonal symmetric qudit states. Restricting to linearly dependent and equally likely pure states, we find the optimal positive operator valued measure (POVM) that maximizes our confidence in identifying each state in the set and minimizes the probability of obtaining inconclusive results. The physical realization of this POVM is completely determined and it is shown that after an inconclusive outcome, the input states may be mapped into a new set of equiprobable symmetric states, restricted, however, to a subspace of the original qudit Hilbert space. By applying the MC measurement again onto this new set, we can still gain some information about the input states, although with less confidence than before. This leads us to introduce the concept of sequential maximum-confidence (SMC) measurements, where the optimized MC strategy is iterated in as many stages as allowed by the input set, until no further information can be extracted from an inconclusive result. Within each stage of this measurement our confidence in identifying the input states is the highest possible, although it decreases from one stage to the next. In addition, the more stages we accomplish within the maximum allowed, the higher will be the probability of correct identification. We will discuss an explicit example of the optimal SMC measurement applied in the discrimination among four symmetric qutrit states and propose an optical network to implement it.

  6. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized the fitting of finite mixture models by maximum likelihood estimation, as it provides desirable asymptotic properties. In particular, it shows consistency as the sample size increases to infinity, which illustrates that maximum likelihood estimation is asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
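
    A minimal example of such a fit (scikit-learn's EM-based maximum likelihood estimator on synthetic data; the two components and all values are illustrative, not the rubber-price series used in the paper):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(-0.5, 0.3, 600),
                            rng.normal(0.8, 0.5, 400)]).reshape(-1, 1)

        gm = GaussianMixture(n_components=2, random_state=0).fit(x)
        print("weights:", gm.weights_.round(3))
        print("means:  ", gm.means_.ravel().round(3))
        print("total log-likelihood:", gm.score(x) * len(x))   # score() is per-sample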

  7. Impact of different pack sizes of paracetamol in the United Kingdom and Ireland on intentional overdoses: a comparative study

    LENUS (Irish Health Repository)

    Hawton, Keith

    2011-06-10

    Background: In order to reduce fatal self-poisoning, legislation was introduced in the UK in 1998 to restrict pack sizes of paracetamol sold in pharmacies (maximum 32 tablets) and non-pharmacy outlets (maximum 16 tablets), and in Ireland in 2001, but with smaller maximum pack sizes (24 and 12 tablets). Our aim was to determine whether this resulted in smaller overdoses of paracetamol in Ireland compared with the UK. Methods: We used data on general hospital presentations for non-fatal self-harm for 2002-2007 from the Multicentre Study of Self-harm in England (six hospitals), and from the National Registry of Deliberate Self-harm in Ireland. We compared sizes of overdoses of paracetamol in the two settings. Results: There were clear peaks in numbers of non-fatal overdoses, associated with maximum pack sizes of paracetamol in pharmacy and non-pharmacy outlets in both England and Ireland. Significantly more pack equivalents (based on maximum non-pharmacy pack sizes) were used in overdoses in Ireland (mean 2.63, 95% CI 2.57-2.69) compared with England (2.07, 95% CI 2.03-2.10). The overall size of overdoses did not differ significantly between England (median 22, interquartile range (IQR) 15-32) and Ireland (median 24, IQR 12-36). Conclusions: The difference in paracetamol pack size legislation between England and Ireland does not appear to have resulted in a major difference in sizes of overdoses. This is because more pack equivalents are taken in overdoses in Ireland, possibly reflecting differing enforcement of sales advice. Differences in access to clinical services may also be relevant.

  8. Encoding Strategy for Maximum Noise Tolerance Bidirectional Associative Memory

    National Research Council Canada - National Science Library

    Shen, Dan

    2003-01-01

    In this paper, the Basic Bidirectional Associative Memory (BAM) is extended by choosing weights in the correlation matrix, for a given set of training pairs, which result in a maximum noise tolerance set for BAM...
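
    For context, the basic BAM that the paper extends stores pattern pairs in a correlation matrix and recalls by iterating thresholded products; a minimal sketch (tiny illustrative patterns):

        import numpy as np

        def bam_train(pairs):
            return sum(np.outer(x, y) for x, y in pairs)   # W = sum_k x_k y_k^T

        def bam_recall(W, x, steps=10):
            sign = lambda v: np.where(v >= 0, 1, -1)
            for _ in range(steps):                         # iterate until (x, y) stabilizes
                y = sign(W.T @ x)
                x = sign(W @ y)
            return x, y

        pairs = [(np.array([1, -1, 1, -1]), np.array([1, 1, -1])),
                 (np.array([-1, -1, 1, 1]), np.array([-1, 1, 1]))]
        W = bam_train(pairs)
        print(bam_recall(W, np.array([1, 1, 1, -1])))      # one flipped bit is recovered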

  9. Generalization of some hidden subgroup algorithms for input sets of arbitrary size

    Science.gov (United States)

    Poslu, Damla; Say, A. C. Cem

    2006-05-01

    We consider the problem of generalizing some quantum algorithms so that they will work on input domains whose cardinalities are not necessarily powers of two. When analyzing the algorithms we assume that generating superpositions of arbitrary subsets of basis states whose cardinalities are not necessarily powers of two perfectly is possible. We have taken Ballhysa's model as a template and have extended it to Chi, Kim and Lee's generalizations of the Deutsch-Jozsa algorithm and to Simon's algorithm. With perfectly equal superpositions of input sets of arbitrary size, Chi, Kim and Lee's generalized Deutsch-Jozsa algorithms, both for evenly-distributed and evenly-balanced functions, worked with one-sided error property. For Simon's algorithm the success probability of the generalized algorithm is the same as that of the original for input sets of arbitrary cardinalities with equiprobable superpositions, since the property that the measured strings are all those which have dot product zero with the string we search, for the case where the function is 2-to-1, is not lost.

  10. Hydraulic limits on maximum plant transpiration and the emergence of the safety-efficiency trade-off.

    Science.gov (United States)

    Manzoni, Stefano; Vico, Giulia; Katul, Gabriel; Palmroth, Sari; Jackson, Robert B; Porporato, Amilcare

    2013-04-01

    Soil and plant hydraulics constrain ecosystem productivity by setting physical limits to water transport and hence carbon uptake by leaves. While more negative xylem water potentials provide a larger driving force for water transport, they also cause cavitation that limits hydraulic conductivity. An optimum balance between driving force and cavitation occurs at intermediate water potentials, thus defining the maximum transpiration rate the xylem can sustain (denoted as E(max)). The presence of this maximum raises the question as to whether plants regulate transpiration through stomata to function near E(max). To address this question, we calculated E(max) across plant functional types and climates using a hydraulic model and a global database of plant hydraulic traits. The predicted E(max) compared well with measured peak transpiration across plant sizes and growth conditions (R = 0.86, P < 0.001), consistent with the emergence of a safety-efficiency trade-off in plant xylem. Stomatal conductance allows maximum transpiration rates despite partial cavitation in the xylem, thereby suggesting coordination between stomatal regulation and xylem hydraulic characteristics.
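
    The trade-off can be made concrete with a toy calculation: conductivity declining with water potential (a Weibull-type vulnerability curve) multiplied by the increasing driving force yields an interior maximum (a simplified form, not the authors' full model; all parameter values hypothetical):

        import numpy as np

        k_sat, d, c = 10.0, 2.0, 3.0              # conductivity and Weibull parameters
        psi_soil = -0.2                            # soil water potential (MPa)

        def k(psi):                                # cavitation-reduced conductivity
            return k_sat * np.exp(-((-psi / d) ** c))

        psi_leaf = np.linspace(-0.21, -6.0, 1000)
        E = k(psi_leaf) * (psi_soil - psi_leaf)    # driving force vs. lost conductivity
        i = int(np.argmax(E))
        print(f"E_max ~ {E[i]:.2f} at leaf water potential {psi_leaf[i]:.2f} MPa")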

  11. Experimental investigation on the influence of instrument settings on pixel size and nonlinearity in SEM image formation

    DEFF Research Database (Denmark)

    Carli, Lorenzo; Genta, Gianfranco; Cantatore, Angela

    2010-01-01

    The work deals with an experimental investigation of the influence of three Scanning Electron Microscope (SEM) instrument settings (accelerating voltage, spot size and magnification) on the image formation process. Pixel size and nonlinearity were chosen as output parameters related to image quality and resolution. A calibrated silicon grating artifact was employed to investigate, qualitatively and quantitatively through a designed-experiment approach, the relevance of these parameters. SEM magnification was found to account for by far the largest contribution to both parameters under consideration......

  12. Paving the road to maximum productivity.

    Science.gov (United States)

    Holland, C

    1998-01-01

    "Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.

  13. Maximum Recoverable Gas from Hydrate Bearing Sediments by Depressurization

    KAUST Repository

    Terzariol, Marco

    2017-11-13

    The estimation of gas production rates from hydrate bearing sediments requires complex numerical simulations. This manuscript presents a set of simple and robust analytical solutions to estimate the maximum depressurization-driven recoverable gas. These limiting-equilibrium solutions are established when the dissociation front reaches steady state conditions and ceases to expand further. Analytical solutions show the relevance of (1) relative permeabilities between the hydrate free sediment, the hydrate bearing sediment, and the aquitard layers, and (2) the extent of depressurization in terms of the fluid pressures at the well, at the phase boundary, and in the far field. Closed-form solutions for the size of the produced zone allow for expeditious financial analyses; the results highlight the need for innovative production strategies in order to make hydrate accumulations an economically viable energy resource. Horizontal directional drilling and multi-wellpoint seafloor dewatering installations may lead to advantageous production strategies in shallow seafloor reservoirs.

  14. Einstein-Dirac theory in spin maximum I

    International Nuclear Information System (INIS)

    Crumeyrolle, A.

    1975-01-01

    A unitary Einstein-Dirac theory, first in spin maximum 1, is constructed. An original feature of this article is that it is written without any tetrad techniques; only basic notions and existence conditions for spinor structures on pseudo-Riemannian fibre bundles are used. A coupling between gravitation and the electromagnetic field is pointed out, in the geometric setting of the tangent bundle over space-time. Generalized Maxwell equations for inductive media in the presence of a gravitational field are obtained. The enlarged Einstein-Schroedinger theory gives a particular case of this E.D. theory; E.S. theory is a truncated E.D. theory in spin maximum 1. A close relation between the torsion-vector and Schroedinger's potential exists, and the nullity of the torsion-vector has a spinor meaning. Finally, the Petiau-Duffin-Kemmer theory is incorporated into this geometric setting.

  15. Optimum sample length for estimating anchovy size distribution and the proportion of juveniles per fishing set for the Peruvian purse-seine fleet

    Directory of Open Access Journals (Sweden)

    Rocío Joo

    2017-04-01

    The length distribution of catches represents a fundamental source of information for estimating growth and the spatio-temporal dynamics of cohorts. The length distribution of the catch is estimated from samples of caught individuals. This work studies the optimum number of individuals to sample at each fishing set in order to obtain a representative sample of the length distribution and the proportion of juveniles in the fishing set. To that end, we use anchovy (Engraulis ringens) length data from different fishing sets recorded by at-sea observers from the On-board Observers Program of the Peruvian Marine Research Institute. Finally, we propose an optimum sample size for obtaining robust size and juvenile estimates. Though this work is applied to the anchovy fishery, the procedure can be applied to any fishery, either for on-board or inland biometric measurements.
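
    One generic way to assess such an optimum sample size (a subsampling sketch, not necessarily the authors' procedure; the length distribution and juvenile cutoff are invented):

        import numpy as np

        rng = np.random.default_rng(0)
        lengths = rng.normal(12.0, 1.5, 5000)          # stand-in lengths (cm) in one set
        juvenile = lengths < 12.0                      # hypothetical juvenile cutoff
        true_prop = juvenile.mean()

        for n in (10, 30, 60, 120, 240):
            props = np.array([rng.choice(juvenile, size=n, replace=False).mean()
                              for _ in range(2000)])
            err95 = np.percentile(np.abs(props - true_prop), 95)
            print(f"n={n:4d}: 95th-percentile error in juvenile proportion = {err95:.3f}")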

  16. 7 CFR 51.344 - Size.

    Science.gov (United States)

    2010-01-01

    United States Standards for Grades of Apples for Processing, Size: § 51.344 Size. (a) The minimum and maximum sizes or range...

  17. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  18. Mixed integer linear programming for maximum-parsimony phylogeny inference.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2008-01-01

    Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.

  19. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory

    2010-12-15

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle.' Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms.

  1. Optimal placement and sizing of wind / solar based DG sources in distribution system

    Science.gov (United States)

    Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng

    2017-06-01

    Proper placement and sizing of Distributed Generation (DG) in a distribution system can yield maximum potential benefits. This paper proposes a quantum particle swarm optimization (QPSO) based wind turbine generation unit (WTGU) and photovoltaic (PV) array placement and sizing approach for real power loss reduction and voltage stability improvement of a distribution system. Performance models of wind and solar generation systems are described and classified into PQ, PQ(V) and PI type models for power flow. Because WTGU- and PV-based DGs in a distribution system are geographically restricted, the optimal area and the DG capacity limits of each bus in the candidate area need to be set before optimization; an area optimization method is therefore proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate the performance and effectiveness of the proposed approach.
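
    The QPSO update itself is compact; a minimal sketch minimizing a stand-in objective (a real application would evaluate power loss and voltage deviation via a power flow, which is omitted here):

        import numpy as np

        rng = np.random.default_rng(0)

        def loss(x):                                # stand-in for loss + voltage penalty
            return np.sum((x - np.array([2.0, -1.0, 0.5])) ** 2)

        n_part, dim, beta = 20, 3, 0.75
        X = rng.uniform(-5, 5, (n_part, dim))
        pbest, pcost = X.copy(), np.array([loss(x) for x in X])
        for _ in range(200):
            gbest = pbest[np.argmin(pcost)]
            mbest = pbest.mean(axis=0)              # mean of personal bests
            phi, u = rng.random((n_part, dim)), rng.random((n_part, dim))
            P = phi * pbest + (1 - phi) * gbest     # local attractors
            sign = np.where(rng.random((n_part, dim)) < 0.5, -1.0, 1.0)
            X = P + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)
            cost = np.array([loss(x) for x in X])
            better = cost < pcost
            pbest[better], pcost[better] = X[better], cost[better]
        print("best:", pbest[np.argmin(pcost)].round(3))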

  2. Maximum power analysis of photovoltaic module in Ramadi city

    Energy Technology Data Exchange (ETDEWEB)

    Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)

    2013-07-01

    The performance of a photovoltaic (PV) module is greatly dependent on solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power obtainable in Ramadi city (100 km west of Baghdad) is analyzed in practice. The analysis is based on real irradiance values obtained for the first time by using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential for the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The solar irradiance data were measured on the earth's surface in the campus area of Anbar University. Actual average readings were taken from the data logger of the sun tracker system, which was set to save the average reading every two minutes, based on readings taken each second. The data were analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance were analyzed to optimize the output of photovoltaic solar modules. The results show that the PV system sizing can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.

  3. Tsallis distribution as a standard maximum entropy solution with 'tail' constraint

    International Nuclear Information System (INIS)

    Bercher, J.-F.

    2008-01-01

    We show that Tsallis' distributions can be derived from the standard (Shannon) maximum entropy setting, by incorporating a constraint on the divergence between the distribution and another distribution imagined as its tail. In this setting, we find an underlying entropy which is the Renyi entropy. Furthermore, escort distributions and generalized means appear as a direct consequence of the construction. Finally, the 'maximum entropy tail distribution' is identified as a Generalized Pareto Distribution

  4. 19 mm sized bileaflet valve prostheses' flow field investigated by bidimensional laser Doppler anemometry (part II: maximum turbulent shear stresses)

    Science.gov (United States)

    Barbaro, V; Grigioni, M; Daniele, C; D'Avenio, G; Boccanera, G

    1997-11-01

    The investigation of the flow field generated by cardiac valve prostheses is a necessary task to gain knowledge on the possible relationship between turbulence-derived stresses and the hemolytic and thrombogenic complications in patients after valve replacement. The study of turbulent flows downstream of cardiac prostheses in the literature especially concerns large-sized prostheses, with flow regimes varying from very low up to 6 L/min. The Food and Drug Administration draft guidance requires the study of the minimum prosthetic size at a high cardiac output to reach the maximum Reynolds number conditions. Within the framework of a national research project regarding the characterization of cardiovascular endoprostheses, an in-depth study of the turbulence generated downstream of bileaflet cardiac valves is currently under way at the Laboratory of Biomedical Engineering of the Istituto Superiore di Sanita. Four models of 19 mm bileaflet valve prostheses were used: St Jude Medical HP, Edwards Tekna, Sorin Bicarbon, and CarboMedics. The prostheses were selected by the nominal Tissue Annulus Diameter as reported by the manufacturers, without any assessment of the valve sizing method, and were mounted in the aortic position. The aortic geometry was scaled for 19 mm prostheses using angiographic data. The turbulence-derived shear stresses were investigated very close to the valve (0.35 D0), using a bidimensional laser Doppler anemometry system and applying Principal Stress Analysis. Results concern typical turbulence quantities during a 50 ms window at peak flow in the systolic phase. Conclusions are drawn regarding the turbulence associated with valve design features, as well as the possible damage to blood constituents.
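
    The Principal Stress Analysis step amounts to taking the Mohr's-circle radius of the measured 2D Reynolds stress tensor; a sketch with synthetic velocity fluctuations standing in for LDA samples:

        import numpy as np

        rho = 1060.0                               # blood density (kg/m^3), assumed
        rng = np.random.default_rng(0)
        u = rng.normal(0.0, 0.35, 4096)            # stand-in velocity fluctuations (m/s)
        v = 0.4 * u + rng.normal(0.0, 0.20, 4096)  # correlated second component

        s_xx = rho * np.mean(u * u)                # Reynolds normal stresses
        s_yy = rho * np.mean(v * v)
        s_xy = rho * np.mean(u * v)                # Reynolds shear stress
        tau_max = np.sqrt(((s_xx - s_yy) / 2.0) ** 2 + s_xy ** 2)   # principal shear
        print(f"maximum turbulent shear stress ~ {tau_max:.1f} Pa")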

  5. Maximum physical capacity testing in cancer patients undergoing chemotherapy

    DEFF Research Database (Denmark)

    Knutsen, L.; Quist, M; Midtgaard, J

    2006-01-01

    BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...... in performing maximum physical capacity tests, as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing early in the treatment process. However, the patients were self-referred and thus highly motivated, and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy.

  6. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
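
    The first (deterministic) approach lends itself to a one-line worked example: combining the McGarr-type moment bound with the standard Hanks-Kanamori moment-magnitude relation, for a typical crustal shear modulus and a hypothetical injected volume:

        import math

        G = 3.0e10                   # shear modulus (Pa), typical crustal value
        dV = 1.0e5                   # net injected fluid volume (m^3), hypothetical
        M0_max = G * dV              # McGarr-type upper bound on seismic moment (N*m)
        Mw_max = (2.0 / 3.0) * (math.log10(M0_max) - 9.1)   # Hanks-Kanamori relation
        print(f"M0_max = {M0_max:.2e} N*m -> Mw_max ~ {Mw_max:.2f}")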

  7. Effects of drop sets with resistance training on increases in muscle CSA, strength, and endurance: a pilot study.

    Science.gov (United States)

    Ozaki, Hayao; Kubota, Atsushi; Natsume, Toshiharu; Loenneke, Jeremy P; Abe, Takashi; Machida, Shuichi; Naito, Hisashi

    2018-03-01

    To investigate the effects of a single high-load (80% of one repetition maximum [1RM]) set with additional drop sets descending to a low load (30% 1RM) without recovery intervals on muscle strength, endurance, and size in untrained young men. Nine untrained young men performed dumbbell curls to concentric failure 2-3 days per week for 8 weeks. Each arm was randomly assigned to one of the following three conditions: 3 sets of high-load (HL, 80% 1RM) resistance exercise, 3 sets of low-load (LL, 30% 1RM) resistance exercise, and a single high-load set with additional drop sets descending to a low load (SDS). The mean training time per session, including recovery intervals, was lowest in the SDS condition. Elbow flexor muscle cross-sectional area (CSA) increased similarly in all three conditions. Maximum isometric and 1RM strength of the elbow flexors increased from pre to post only in the HL and SDS conditions. Muscular endurance, measured by maximum repetitions at 30% 1RM, increased only in the LL and SDS conditions. An SDS resistance training program can simultaneously increase muscle CSA, strength, and endurance in untrained young men, even with lower training time compared to typical resistance exercise protocols using only high or low loads.

  8. Effect of Crusher Type and Crusher Discharge Setting On Washability Characteristics of Coal

    Science.gov (United States)

    Ahila, P.; Battacharya, S.

    2018-02-01

    Natural resources have served the life of many civilizations, and among these coal is of prime importance. Coal is the most important and abundant fossil fuel in India, accounting for 55% of the country's energy needs, and it will continue as the mainstay fuel for power generation. Previous research has shown that coal feed size and coal type greatly influence the crushing performance of a given jaw crusher, and that the amount of fines generated from a particular coal depends not only on coal friability but also on crusher type. Crushing and grinding the coal are therefore necessary for downstream processing. In this paper, the effect of crusher type and crusher discharge setting on the washability characteristics of the same crushed non-coking coal has been studied. Four different crushers were investigated at variable parameters such as discharge settings, capacities, and feed openings. The experimental work was conducted for all crushers with the same feed size and HGI (Hardgrove Grindability Index). The results indicate that the four crushers involved in the experimental work differ not only in product size distribution but also in reduction ratio. Maximum breakage occurred at the coarsest size fraction irrespective of crusher type and discharge setting.

  9. Influence of prey dispersion on territory and group size of African lions: a test of the resource dispersion hypothesis.

    Science.gov (United States)

    Valeix, Marion; Loveridge, Andrew J; MacDonald, David W

    2012-11-01

    Empirical tests of the resource dispersion hypothesis (RDH), a theory to explain group living based on resource heterogeneity, have been complicated by the fact that resource patch dispersion and richness have proved difficult to define and measure in natural systems. Here, we studied the ecology of African lions Panthera leo in Hwange National Park, Zimbabwe, where waterholes are prey hotspots, and where the dispersion of water sources and the abundance of prey at these water sources are quantifiable. We combined a 10-year data set from GPS-collared lions, for which information on group composition was available, with concurrent data on herbivore abundance at waterholes. The distance between two neighboring waterholes was a strong determinant of lion home range size, which provides strong support for the RDH prediction that territory size increases as resource patches become more dispersed in the landscape. The mean number of herbivore herds using a waterhole, a good proxy of patch richness, determined the maximum lion group biomass an area can support. This finding suggests that patch richness sets a maximum ceiling on lion group size. This study demonstrates that landscape ecology is a major driver of ranging behavior and suggests that aspects of resource dispersion limit group sizes.

  10. Sample size calculation in metabolic phenotyping studies.

    Science.gov (United States)

    Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J

    2015-09-01

    The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences on experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, there was no standard procedure available to address this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum number of statistically significant variations). The DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for kernel and log-normal estimates, and in GNU Octave for log-normal estimates (kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
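
    Since the DSD code itself is published at the URLs above, the Python sketch below only illustrates the general recipe the abstract describes: resample an expanded data set from a small pilot cohort, apply a Benjamini-Yekutieli-corrected test at each candidate size, and take the smallest size meeting the biomarker-discovery criterion. The function names and the simple bootstrap resampling scheme are assumptions, not the authors' implementation.

        import numpy as np
        from scipy import stats
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(0)

        def expected_hits(pilot_a, pilot_b, n, n_sim=200, alpha=0.05):
            """Average number of variables still significant after
            Benjamini-Yekutieli correction in simulated studies with n subjects
            per group, resampled with replacement from the pilot cohorts."""
            hits = []
            for _ in range(n_sim):
                a = rng.choice(pilot_a, size=n, replace=True, axis=0)
                b = rng.choice(pilot_b, size=n, replace=True, axis=0)
                p = stats.ttest_ind(a, b, axis=0).pvalue
                hits.append(multipletests(p, alpha=alpha, method="fdr_by")[0].sum())
            return float(np.mean(hits))

        def pilot_based_sample_size(pilot_a, pilot_b, candidates=(5, 10, 20, 40, 80)):
            """Smallest n yielding at least one expected significant variation
            (the 'biomarker discovery' criterion)."""
            for n in candidates:
                if expected_hits(pilot_a, pilot_b, n) >= 1.0:
                    return n
            return None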

  11. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is important to obtain the maximum power from the limited solar panels. As the sun's illumination changes with the angle of incidence of solar radiation and with the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of solar radiation, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary in degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires the linguistic control rules for the maximum power point; the mathematical model is not required, and the control method is therefore easy to implement in a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory, where the hardware consists of the Microchip microcontroller unit control card and
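
    The perturbation and observation baseline mentioned above is compact enough to sketch. The Python below shows a plain fixed-step hill-climbing loop, not the paper's fuzzy controller; read_power and set_voltage stand in for hardware access and are assumptions.

        def perturb_and_observe(read_power, set_voltage, v0, dv=0.5, steps=100):
            """Minimal fixed-step P&O (hill climbing): keep stepping the panel
            operating voltage in the direction that last increased power, and
            reverse direction whenever power drops."""
            v, direction = v0, 1.0
            p_prev = read_power()
            for _ in range(steps):
                v += direction * dv
                set_voltage(v)
                p = read_power()
                if p < p_prev:        # we stepped past the maximum power point
                    direction = -direction
                p_prev = p
            return v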

  12. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  13. Hierarchical sets: analyzing pangenome structure through scalable set visualizations

    Science.gov (United States)

    2017-01-01

    Motivation: The increase in available microbial genome sequences has resulted in an increase in the size of the pangenomes being analyzed. Current pangenome visualizations are not intended for the pangenome sizes possible today, and new approaches are necessary in order to convert the increase in available information into an increase in knowledge. As the pangenome data structure is essentially a collection of sets, we explore the potential of scalable set visualization as a tool for pangenome analysis. Results: We present a new hierarchical clustering algorithm based on set arithmetic that optimizes the intersection sizes along the branches. The intersection and union sizes along the hierarchy are visualized using a composite dendrogram and icicle plot, which, in a pangenome context, shows the evolution of pangenome and core size along the evolutionary hierarchy. Outlying elements, i.e. elements whose presence patterns do not correspond with the hierarchy, can be visualized using hierarchical edge bundles. When applied to pangenome data this plot shows putative horizontal gene transfers between the genomes and can highlight relationships between genomes that are not represented by the hierarchy. We illustrate the utility of hierarchical sets by applying it to a pangenome based on 113 Escherichia and Shigella genomes and find that it provides a powerful addition to pangenome analysis. Availability and Implementation: The described clustering algorithm and visualizations are implemented in the hierarchicalSets R package available from CRAN (https://cran.r-project.org/web/packages/hierarchicalSets). Contact: thomasp85@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28130242
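
    The clustering idea translates naturally into pseudocode. Below is a rough Python sketch of agglomeration driven by set arithmetic: always merge the pair of clusters whose gene-set intersection is largest, recording core (intersection) and pangenome (union) sizes along the way. This only illustrates the principle; the hierarchicalSets R package implements a more refined optimization.

        from itertools import combinations

        def hierarchical_sets(gene_sets):
            """Greedy agglomeration: repeatedly merge the two clusters whose
            gene-set intersection is largest, recording core (intersection) and
            pangenome (union) sizes along the hierarchy. For brevity a merged
            cluster is represented by its core."""
            clusters = [(name, frozenset(s)) for name, s in gene_sets.items()]
            history = []
            while len(clusters) > 1:
                i, j = max(combinations(range(len(clusters)), 2),
                           key=lambda ij: len(clusters[ij[0]][1] & clusters[ij[1]][1]))
                (na, sa), (nb, sb) = clusters[i], clusters[j]
                history.append((na, nb, len(sa & sb), len(sa | sb)))
                clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
                clusters.append(((na, nb), sa & sb))
            return history

        # Example: hierarchical_sets({"g1": {"a", "b"}, "g2": {"a", "c"}, "g3": {"d"}})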

  14. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    Science.gov (United States)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide key solutions to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by the graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for practical applicability to large-scale real-world systems: (1) small set size, (2) minimal network information required for the construction scheme, (3) fast and easy computational implementation, and (4) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
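
    The construction at the heart of this approach is simple enough to sketch. In the Python below, a maximal independent set is grown greedily; by maximality every node outside the set has a neighbor inside it, so the result is simultaneously independent and dominating. The random tie-breaking is an assumption; degree-aware selection rules typically give smaller sets.

        import random

        def maximal_independent_set(adj, seed=0):
            """Greedy MIS: pick a remaining node at random, add it, and delete
            it together with its neighbors. By maximality, every excluded node
            has a neighbor in the set, so the MIS is also a dominating set."""
            rng = random.Random(seed)
            remaining = set(adj)
            mis = set()
            while remaining:
                v = rng.choice(sorted(remaining))
                mis.add(v)
                remaining -= adj[v] | {v}
            return mis

        # Toy graph as adjacency sets:
        print(maximal_independent_set({1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}))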

  15. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    OpenAIRE

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C.

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...

  16. The reference frame for encoding and retention of motion depends on stimulus set size.

    Science.gov (United States)

    Huynh, Duong; Tripathy, Srimant P; Bedell, Harold E; Öğmen, Haluk

    2017-04-01

    The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information.

  17. Maximum leaf conductance driven by CO2 effects on stomatal size and density over geologic time.

    Science.gov (United States)

    Franks, Peter J; Beerling, David J

    2009-06-23

    Stomatal pores are microscopic structures on the epidermis of leaves formed by 2 specialized guard cells that control the exchange of water vapor and CO2 between plants and the atmosphere. Stomatal size (S) and density (D) determine maximum leaf diffusive (stomatal) conductance of CO2 (gc(max)) to sites of assimilation. Although large variations in D observed in the fossil record have been correlated with atmospheric CO2, the crucial significance of similarly large variations in S has been overlooked. Here, we use physical diffusion theory to explain why large changes in S necessarily accompanied the changes in D and atmospheric CO2 over the last 400 million years. In particular, we show that high densities of small stomata are the only way to attain the highest gc(max) values required to counter CO2 "starvation" at low atmospheric CO2 concentrations. This explains cycles of increasing D and decreasing S evident in the fossil history of stomata under the CO2-impoverished atmospheres of the Permo-Carboniferous and Cenozoic glaciations. The pattern was reversed under rising atmospheric CO2 regimes. Selection for small S was crucial for attaining high gc(max) under falling atmospheric CO2 and, therefore, may represent a mechanism linking CO2 and the increasing gas-exchange capacity of land plants over geologic time.

  18. Size-based predictions of food web patterns

    DEFF Research Database (Denmark)

    Zhang, Lai; Hartvig, Martin; Knudsen, Kim

    2014-01-01

    We employ size-based theoretical arguments to derive simple analytic predictions of ecological patterns and properties of natural communities: size-spectrum exponent, maximum trophic level, and susceptibility to invasive species. The predictions are brought about by assuming that an infinite number...... of species are continuously distributed on a size-trait axis. It is, however, an open question whether such predictions are valid for a food web with a finite number of species embedded in a network structure. We address this question by comparing the size-based predictions to results from dynamic food web...... simulations with varying species richness. To this end, we develop a new size- and trait-based food web model that can be simplified into an analytically solvable size-based model. We confirm existing solutions for the size distribution and derive novel predictions for maximum trophic level and invasion...

  19. Cell size, genome size and the dominance of Angiosperms

    Science.gov (United States)

    Simonin, K. A.; Roddy, A. B.

    2016-12-01

    Angiosperms are capable of maintaining the highest rates of photosynthetic gas exchange of all land plants. High rates of photosynthesis depend mechanistically both on efficiently transporting water to the sites of evaporation in the leaf and on regulating the loss of that water to the atmosphere as CO2 diffuses into the leaf. Angiosperm leaves are unique in their ability to sustain high fluxes of liquid- and vapor-phase water transport due to high vein densities and numerous, small stomata. Despite the ubiquity of studies characterizing the anatomical and physiological adaptations that enable angiosperms to maintain high rates of photosynthesis, the underlying explanation of why they have been able to develop such high leaf vein densities, and such small and abundant stomata, is still incomplete. Here we ask whether the scaling of genome size and cell size places a fundamental constraint on the photosynthetic metabolism of land plants, and whether genome downsizing among the angiosperms directly contributed to their greater potential and realized primary productivity relative to the other major groups of terrestrial plants. Using previously published data, we show that a single relationship can predict guard cell size from genome size across the major groups of terrestrial land plants (e.g. angiosperms, conifers, cycads and ferns). Similarly, a strong positive correlation exists between genome size and both stomatal density and vein density that together ultimately constrain maximum potential (gs,max) and operational stomatal conductance (gs,op). Further, the difference in the slopes describing the covariation between genome size and both gs,max and gs,op suggests that genome downsizing brings gs,op closer to gs,max. Taken together, the data presented here suggest that the smaller genomes of angiosperms allow their final cell sizes to vary more widely and respond more directly to environmental conditions, and in doing so bring operational photosynthetic

  20. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Science.gov (United States)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
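
    For a stationary Markov chain the information entropy production has a closed form, which the sketch below evaluates. The formula is standard background rather than taken from the paper: e_p = sum over i,j of pi_i P_ij log(pi_i P_ij / (pi_j P_ji)), which vanishes exactly when detailed balance holds.

        import numpy as np

        def stationary(P):
            """Stationary distribution: left eigenvector of P for eigenvalue 1."""
            w, v = np.linalg.eig(P.T)
            pi = np.real(v[:, np.argmax(np.real(w))])
            return pi / pi.sum()

        def entropy_production(P):
            """e_p = sum_ij pi_i P_ij log(pi_i P_ij / (pi_j P_ji)); zero exactly
            when detailed balance holds (time-reversible chain)."""
            pi = stationary(P)
            flux = pi[:, None] * P
            mask = (flux > 0) & (flux.T > 0)
            return float(np.sum(flux[mask] * np.log(flux[mask] / flux.T[mask])))

        P = np.array([[0.0, 0.9, 0.1],
                      [0.1, 0.0, 0.9],
                      [0.9, 0.1, 0.0]])   # strongly cyclic, hence irreversible
        print(entropy_production(P))      # > 0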

  1. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required among the endpoints. However, such a method ignores the correlation among endpoints. When the objective is to reject all endpoints and the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation, under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted methods, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
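
    As a complement, the Monte Carlo sketch below estimates the joint TOST power for two correlated log-scale endpoints (say, log AUC and log Cmax) under a parallel design; with rho = 0 it reproduces the product-of-powers case, and increasing rho shows why a correlation-adjusted sample size can be smaller than the naive per-endpoint maximum. This is an illustrative simulation, not the exact power function derived in the article.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def tost_joint_power(n, delta, sd, rho, theta=np.log(1.25), n_sim=5000):
            """Simulated probability that TOST succeeds on BOTH endpoints at
            once, for a parallel design with correlated log-scale endpoints.
            Uses per-endpoint 90% CIs with df = 2n - 2 (an approximation)."""
            cov = np.array([[sd[0] ** 2, rho * sd[0] * sd[1]],
                            [rho * sd[0] * sd[1], sd[1] ** 2]])
            tcrit = stats.t.ppf(0.95, df=2 * n - 2)
            wins = 0
            for _ in range(n_sim):
                test = rng.multivariate_normal(delta, cov, size=n)
                ref = rng.multivariate_normal([0.0, 0.0], cov, size=n)
                d = test.mean(0) - ref.mean(0)
                se = np.sqrt((test.var(0, ddof=1) + ref.var(0, ddof=1)) / n)
                ok = (d + tcrit * se < theta) & (d - tcrit * se > -theta)
                wins += bool(ok.all())
            return wins / n_sim

        # rho = 0 reproduces the product of the two marginal powers; e.g.
        # tost_joint_power(24, delta=[0.05, 0.02], sd=[0.25, 0.30], rho=0.6)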

  2. Hit size effectiveness in relation to the microdosimetric site size

    International Nuclear Information System (INIS)

    Varma, M.N.; Wuu, C.S.; Zaider, M.

    1994-01-01

    This paper examines the effect of site size (that is, the diameter of the microdosimetric volume) on the hit size effectiveness function (HSEF), q(y), for several endpoints relevant in radiation protection. A Bayesian and maximum entropy approach is used to solve the integral equations that determine, given microdosimetric spectra and measured initial slopes, the function q(y). All microdosimetric spectra have been calculated de novo. The somewhat surprising conclusion of this analysis is that site size plays only a minor role in selecting the hit size effectiveness function q(y). It thus appears that practical means (e.g. conventional proportional counters) are already at hand to actually implement the HSEF as a radiation protection tool. (Author)

  3. Impact of dissolution on the sedimentary record of the Paleocene-Eocene thermal maximum

    Science.gov (United States)

    Bralower, Timothy J.; Kelly, D. Clay; Gibbs, Samantha; Farley, Kenneth; Eccles, Laurie; Lindemann, T. Logan; Smith, Gregory J.

    2014-09-01

    The input of massive amounts of carbon to the atmosphere and ocean at the Paleocene-Eocene Thermal Maximum (PETM; ~55.53 Ma) resulted in pervasive carbonate dissolution at the seafloor. At many sites this dissolution also penetrated into the underlying sediment column. The magnitude of dissolution at and below the seafloor, a process known as chemical erosion, and its effect on the stratigraphy of the PETM, are notoriously difficult to constrain. Here, we illuminate the impact of dissolution by analyzing the complete spectrum of sedimentological grain sizes across the PETM at three deep-sea sites characterized by a range of bottom water dissolution intensity. We show that the grain size spectrum provides a measure of the sediment fraction lost during dissolution. We compare these data with dissolution and other proxy records, electron micrograph observations of samples and lithology. The complete data set indicates that the two sites with slower carbonate accumulation, and less active bioturbation, are characterized by significant chemical erosion. At the third site, higher carbonate accumulation rates, more active bioturbation, and possibly winnowing have limited the impacts of dissolution. However, grain size data suggest that bioturbation and winnowing were not sufficiently intense to diminish the fidelity of isotopic and microfossil assemblage records.

  4. Formal comment on: Myhrvold (2016) Dinosaur metabolism and the allometry of maximum growth rate. PLoS ONE; 11(11): e0163205.

    Science.gov (United States)

    Griebeler, Eva Maria; Werner, Jan

    2018-01-01

    In his 2016 paper, Myhrvold criticized ours from 2014 on maximum growth rates (Gmax, the maximum gain in body mass observed within a time unit throughout an individual's ontogeny) and thermoregulation strategies (ectothermy, endothermy) of 17 dinosaurs. In our paper, we showed that Gmax values of similar-sized extant ectothermic and endothermic vertebrates overlap. This strongly questions a correct assignment of a thermoregulation strategy to a dinosaur based only on its Gmax and (adult) body mass (M). On the contrary, Gmax separated similar-sized extant reptiles and birds (Sauropsida), and Gmax values of our studied dinosaurs were similar to those seen in extant, similar-sized (if necessary scaled-up), fast-growing ectothermic reptiles. Myhrvold examined two hypotheses (H1 and H2) regarding our study. However, we neither inferred dinosaurian thermoregulation strategies from group-wide averages (H1) nor based our results on the assumption that Gmax and metabolic rate (MR) are related (H2). In order to assess whether single dinosaurian Gmax values fit those of extant endotherms (birds) or of ectotherms (reptiles), we had already used a method suggested by Myhrvold to avoid H1, and we only discussed the pros and cons of a relation between Gmax and MR without applying it (H2). We appreciate Myhrvold's efforts to eliminate the correlation between Gmax and M in order to statistically improve vertebrate scaling regressions on maximum gain in body mass. However, we show here that his mass-specific maximum growth rate (kC) replacing Gmax (= M kC) does not model the expected higher mass gain in larger than in smaller species for any set of species. We also comment on why we considered extant reptiles and birds as reference models for extinct dinosaurs and why we used phylogenetically-informed regression analysis throughout our study. Finally, we question several arguments given by Myhrvold in support of his results.

  5. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    Science.gov (United States)

    Huang, Yu

    Solar energy has become one of the major renewable energy options owing to its abundance and accessibility. Because of the intermittent nature of sunlight, there is high demand for Maximum Power Point Tracking (MPPT) techniques when a Photovoltaic (PV) system is used to extract energy from it. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at relatively practical circumstances. Firstly, a practical PV system model is studied, with determination of the series and shunt resistances, which are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the object of the perturbation, deploying input impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy, and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and by detailed analysis of sharp insolation changes, low-insolation conditions, and continuous insolation variation.
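
    A minimal version of the adaptive-step idea can be sketched as follows (Python; the gain, clamps, and hardware callbacks read_power/set_duty are assumptions, not the thesis' tuned values): the duty-ratio perturbation grows with the last observed power change, so tracking is fast after a sharp insolation change and oscillation shrinks near the maximum power point.

        def adaptive_perturb_observe(read_power, set_duty, d0=0.5, gain=1e-4,
                                     step_max=0.05, step_min=1e-4, steps=200):
            """P&O on the boost-converter duty ratio with a step size that grows
            with the last observed power change |dP|: large steps while far from
            the MPP (e.g. right after a sharp insolation change), small steps
            near it to limit steady-state oscillation."""
            d, direction = d0, 1.0
            p_prev = read_power()
            for _ in range(steps):
                p = read_power()
                dp = p - p_prev
                if dp < 0:
                    direction = -direction          # overshot the MPP: reverse
                step = min(step_max, gain * abs(dp) + step_min)
                d = min(0.95, max(0.05, d + direction * step))
                set_duty(d)
                p_prev = p
            return d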

  6. The influence of negative training set size on machine learning-based virtual screening.

    Science.gov (United States)

    Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J

    2014-01-01

    The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of applying machine learning methods was examined for sets containing a fixed number of positive examples and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluation parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed, in conjunction with some decrease in hit recall. The analysis of the dynamics of those variations allows us to recommend an optimal composition of the training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with the SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of the negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.
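
    A compact way to reproduce this kind of experiment is sketched below (Python with scikit-learn; X_act and X_inact stand for fingerprint matrices of actives and of ZINC-drawn inactives, and are placeholders). It trains one classifier per negatives-to-positives ratio and reports precision, recall, and MCC, mirroring the trade-off the study reports.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import matthews_corrcoef, precision_score, recall_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        def screen_performance(X_act, X_inact, neg_ratio, test_size=0.3):
            """Train/test a classifier on a fixed positive set plus neg_ratio
            times as many randomly drawn negatives; report precision, recall
            and MCC for the active class."""
            n_neg = int(neg_ratio * len(X_act))
            idx = rng.choice(len(X_inact), size=n_neg, replace=False)
            X = np.vstack([X_act, X_inact[idx]])
            y = np.r_[np.ones(len(X_act)), np.zeros(n_neg)]
            Xtr, Xte, ytr, yte = train_test_split(
                X, y, test_size=test_size, stratify=y, random_state=0)
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            pred = clf.fit(Xtr, ytr).predict(Xte)
            return (precision_score(yte, pred), recall_score(yte, pred),
                    matthews_corrcoef(yte, pred))

        # Sweep the negatives-to-positives ratio:
        # for r in (1, 2, 5, 10):
        #     print(r, screen_performance(X_actives, X_zinc_decoys, r))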

  7. Determining the Variability of Lesion Size Measurements from CT Patient Data Sets Acquired under “No Change” Conditions

    Directory of Open Access Journals (Sweden)

    Michael F. McNitt-Gray

    2015-02-01

    PURPOSE: To determine the variability of lesion size measurements in computed tomography data sets of patients imaged under a "no change" ("coffee break") condition, and to determine the impact of two reading paradigms on measurement variability. METHOD AND MATERIALS: Using data sets from 32 non-small cell lung cancer patients scanned twice within 15 minutes ("no change"), measurements were performed by five radiologists in two phases: (1) independent reading of each computed tomography data set (timepoint); (2) a locked, sequential reading of data sets. Readers performed measurements using several sizing methods, including one-dimensional (1D) longest in-slice dimension and 3D semi-automated segmented volume. Change in size was estimated by comparing measurements performed on both timepoints for the same lesion, for each reader and each measurement method. For each reading paradigm, results were pooled across lesions, across readers, and across both readers and lesions, for each measurement method. RESULTS: The mean percent difference (±SD) when pooled across both readers and lesions for 1D and 3D measurements extracted from contours was 2.8 ± 22.2% and 23.4 ± 105.0%, respectively, for the independent reads. For the locked, sequential reads, the mean percent differences (±SD) reduced to 2.52 ± 14.2% and 7.4 ± 44.2% for the 1D and 3D measurements, respectively. CONCLUSION: Even under a "no change" condition between scans, there is variation in lesion size measurements due to repeat scans and variations in reader, lesion, and measurement method. This variation is reduced when using a locked, sequential reading paradigm compared to an independent reading paradigm.

  8. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  9. Critical threshold size for overwintering sandeels (Ammodytes marinus)

    DEFF Research Database (Denmark)

    Deurs, Mikael van; Hartvig, Martin; Steffensen, John Fleng

    2011-01-01

    scales with body size and increases with temperature, and the two factors together determine a critical threshold size for passive overwintering below which the organism is unlikely to survive without feeding. This is because the energetic cost of metabolism exceeds maximum energy reserves...... independent long-term overwintering experiments. Maximum attainable energy reserves were estimated from published data on A. marinus in the North Sea. The critical threshold size in terms of length (Lth) for A. marinus in the North Sea was estimated to be 9.5 cm. We then investigated two general predictions...
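
    The threshold logic can be made concrete with a toy calculation (all parameter values below are invented for illustration and are not the paper's estimates): the winter metabolic cost is assumed to scale as a*W**b per day while maximum reserves scale linearly with mass, so the two curves cross at a critical mass, converted to length via a cubic weight-length relation.

        def critical_threshold_length(a=0.002, b=0.8, f=0.25, days=180, q=0.005):
            """Illustrative threshold: the whole-winter metabolic cost,
            a * W**b per day over `days`, equals the maximum reserves f * W
            (same energy units); weight-length relation W = q * L**3.
            Below the crossing mass, cost exceeds reserves (since b < 1)."""
            w_crit = (a * days / f) ** (1.0 / (1.0 - b))   # critical mass, g
            return (w_crit / q) ** (1.0 / 3.0)             # critical length, cm

        print(round(critical_threshold_length(), 1))       # ~10.7 with toy values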

  10. Effects of display set size and its variability on the event-related potentials during a visual search task

    OpenAIRE

    Miyatani, Makoto; Sakata, Sumiko

    1999-01-01

    This study investigated the effects of display set size and its variability on the event-related potentials (ERPs) during a visual search task. In Experiment 1, subjects were required to respond if a visual display, which consisted of two, four, or six letters, contained one of the two members of a memory set. In Experiment 2, subjects detected a change in the shape of a fixation stimulus, which was surrounded by the same letters as in Experiment 1. In the search task (Experiment 1), the incr

  11. Maximum Permissible Concentrations and Negligible Concentrations for pesticides

    NARCIS (Netherlands)

    Crommentuijn T; Kalf DF; Polder MD; Posthumus R; Plassche EJ van de; CSR

    1997-01-01

    Maximum Permissible Concentrations (MPCs) and Negligible Concentrations (NCs) derived for a series of pesticides are presented in this report. These MPCs and NCs are used by the Ministry of Housing, Spatial Planning and the Environment (VROM) to set Environmental Quality Objectives. For some of the

  12. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is set not by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application to a real dataset.

  13. Probabilistic Estimation of Critical Flaw Sizes in the Primary Structure Welds of the Ares I-X Launch Vehicle

    Science.gov (United States)

    Pai, Shantaram S.; Hoge, Peter A.; Patel, B. M.; Nagpal, Vinod K.

    2009-01-01

    The primary structure of the Ares I-X Upper Stage Simulator (USS) launch vehicle is constructed of welded mild steel plates. There is some concern over the possibility of structural failure due to welding flaws. It was considered critical to quantify the impact of uncertainties in residual stress, material porosity, applied loads, and material and crack growth properties on the reliability of the welds during pre-flight and flight. A criterion was established to estimate the reliability of the welds: any existing crack at the weld toe must be smaller than the maximum allowable flaw size. A spectrum of maximum allowable flaw sizes was developed for different possible combinations of all of the above listed variables by performing probabilistic crack growth analyses using the ANSYS finite element analysis code in conjunction with the NASGRO crack growth code. Two alternative methods were used to account for residual stresses: (1) The mean residual stress was assumed to be 41 ksi and a limit was set on the net section flow stress during crack propagation. The critical flaw size was determined by parametrically increasing the initial flaw size and detecting whether this limit was exceeded during four complete flight cycles. (2) The mean residual stress was assumed to be 49.6 ksi (the parent material's yield strength) and the net section flow stress limit was ignored. The critical flaw size was determined by parametrically increasing the initial flaw size and detecting whether catastrophic crack growth occurred during four complete flight cycles. Both surface-crack models and through-crack models were utilized to characterize cracks in the weld toe.

  14. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures, whereas the outcome of standard algorithms has a much wider variability.
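
    For readers wanting the likelihood itself, the sketch below evaluates a cluster log-likelihood of the Giada-Marsili form from the Pearson correlation matrix alone; the exact expression is quoted from memory and should be treated as an assumption to be checked against the original article. A clustering algorithm then simply searches (e.g., by greedy merging) for the label assignment maximizing this quantity.

        import numpy as np

        def cluster_log_likelihood(C, labels):
            """Log-likelihood of a cluster assignment from the correlation
            matrix C alone, in the Giada-Marsili form (quoted from memory):
            Lc = 1/2 * sum over clusters s with n_s > 1 of
                 log(n_s / c_s) + (n_s - 1) * log((n_s**2 - n_s) / (n_s**2 - c_s)),
            where c_s sums all pairwise correlations inside cluster s."""
            labels = np.asarray(labels)
            lc = 0.0
            for s in np.unique(labels):
                idx = np.flatnonzero(labels == s)
                n = len(idx)
                c = C[np.ix_(idx, idx)].sum()
                if n > 1 and n < c < n * n:   # non-trivial internal correlation
                    lc += 0.5 * (np.log(n / c)
                                 + (n - 1) * np.log((n * n - n) / (n * n - c)))
            return lc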

  15. Plant Size and Competitive Dynamics along Nutrient Gradients.

    Science.gov (United States)

    Goldberg, Deborah E; Martina, Jason P; Elgersma, Kenneth J; Currie, William S

    2017-08-01

    Resource competition theory in plants has focused largely on resource acquisition traits that are independent of size, such as traits of individual leaves or roots or proportional allocation to different functions. However, plants also differ in maximum potential size, which could outweigh differences in module-level traits. We used a community ecosystem model called mondrian to investigate whether larger size inevitably increases competitive ability and how size interacts with nitrogen supply. Contrary to the conventional wisdom that bigger is better, we found that invader success and competitive ability are unimodal functions of maximum potential size, such that plants that are too large (or too small) are disproportionately suppressed by competition. Optimal size increases with nitrogen supply, even when plants compete for nitrogen only in a size-symmetric manner, although adding size-asymmetric competition for light does substantially increase the advantage of larger size at high nitrogen. These complex interactions of plant size and nitrogen supply lead to strong nonlinearities such that small differences in nitrogen can result in large differences in plant invasion success and the influence of competition along productivity gradients.

  16. Probable Maximum Earthquake Magnitudes for the Cascadia Subduction

    Science.gov (United States)

    Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.

    2013-12-01

    The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, β-value (which equals 2/3 b-value in the GR distribution) and corner magnitude (mc), can be obtained by applying maximum likelihood method to earthquake catalogs with additional constraint from tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc

  17. A homeostatic clock sets daughter centriole size in flies

    Science.gov (United States)

    Aydogan, Mustafa G.; Steinacker, Thomas L.; Novak, Zsofia A.; Baumbach, Janina; Muschalik, Nadine

    2018-01-01

    Centrioles are highly structured organelles whose size is remarkably consistent within any given cell type. New centrioles are born when Polo-like kinase 4 (Plk4) recruits Ana2/STIL and Sas-6 to the side of an existing “mother” centriole. These two proteins then assemble into a cartwheel, which grows outwards to form the structural core of a new daughter. Here, we show that in early Drosophila melanogaster embryos, daughter centrioles grow at a linear rate during early S-phase and abruptly stop growing when they reach their correct size in mid- to late S-phase. Unexpectedly, the cartwheel grows from its proximal end, and Plk4 determines both the rate and period of centriole growth: the more active the centriolar Plk4, the faster centrioles grow, but the faster centriolar Plk4 is inactivated and growth ceases. Thus, Plk4 functions as a homeostatic clock, establishing an inverse relationship between growth rate and period to ensure that daughter centrioles grow to the correct size. PMID:29500190

  18. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of numerically obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which reduces to the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
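
    The procedure is easy to sketch for a two-component 1-D normal mixture (Python; an illustration, with the usual EM update standing in for the likelihood-equation map): taking step = 1 gives plain successive approximation (EM), while other step sizes in (0, 2) deflect along the same direction, which is the generalization the paper analyzes.

        import numpy as np

        def em_step(x, w, mu, var):
            """One plain EM update (the paper's iteration with step size 1)
            for a two-component 1-D normal mixture."""
            pdf = lambda m, v: np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
            r1 = w * pdf(mu[0], var[0])
            r2 = (1 - w) * pdf(mu[1], var[1])
            g = r1 / (r1 + r2)                    # responsibilities, component 1
            mu1 = np.array([np.sum(g * x) / np.sum(g),
                            np.sum((1 - g) * x) / np.sum(1 - g)])
            var1 = np.array([np.sum(g * (x - mu1[0]) ** 2) / np.sum(g),
                             np.sum((1 - g) * (x - mu1[1]) ** 2) / np.sum(1 - g)])
            return g.mean(), mu1, var1

        def relaxed_em(x, w, mu, var, step=1.5, iters=200):
            """Deflected-gradient variant: move `step` of the way from the
            current parameters toward the EM update. step=1 is plain EM; the
            paper proves local convergence for 0 < step < 2."""
            mu, var = np.asarray(mu, float), np.asarray(var, float)
            for _ in range(iters):
                w1, mu1, var1 = em_step(x, w, mu, var)
                w = float(np.clip(w + step * (w1 - w), 1e-6, 1 - 1e-6))
                mu = mu + step * (mu1 - mu)
                var = np.maximum(var + step * (var1 - var), 1e-8)
            return w, mu, var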

  19. Laboratory test on maximum and minimum void ratio of tropical sand matrix soils

    Science.gov (United States)

    Othman, B. A.; Marto, A.

    2018-04-01

    Sand is generally known as a loose granular material which has a grain size finer than gravel and coarser than silt and can be very angular to well-rounded in shape. The presence of various amounts of fines, which also influences the loosest and densest states of sand in natural conditions, is well known to contribute to deformation and loss of shear strength in soils. This paper presents the effect of a range of fines contents on the minimum void ratio emin and maximum void ratio emax of sand matrix soils. Laboratory tests to determine emin and emax of sand matrix soils were conducted using a non-standard method introduced by previous researchers. Clean sand was obtained from a natural mining site at Johor, Malaysia. Sets of 3 different sizes of sand (fine sand, medium sand, and coarse sand) were mixed with 0% to 40% by weight of low-plasticity fines (kaolin). Results showed that, in general, emin and emax decreased with increasing fines content up to about 30% fines, and then increased thereafter.

  20. Simultaneous identification of long similar substrings in large sets of sequences

    Directory of Open Access Journals (Sweden)

    Wittig Burghardt

    2007-05-01

    Background: Sequence comparison faces new challenges today, with many complete genomes and large libraries of transcripts known. Gene annotation pipelines match these sequences in order to identify genes and their alternative splice forms. However, the software currently available cannot simultaneously compare sets of sequences as large as necessary, especially if errors must be considered. Results: We therefore present a new algorithm for the identification of almost perfectly matching substrings in very large sets of sequences. Its implementation, called ClustDB, is considerably faster and can handle 16 times more data than VMATCH, the most memory-efficient exact program known today. ClustDB simultaneously generates large sets of exactly matching substrings of a given minimum length as seeds for a novel method of match extension with errors. It generates alignments of maximum length with a given maximum number of errors within each overlapping window of a given size. Such alignments are not optimal in the usual sense but are faster to calculate and often more appropriate than traditional alignments for genomic sequence comparisons, EST and full-length cDNA matching, and genomic sequence assembly. The method is used to check the overlaps and to reveal possible assembly errors for 1377 Medicago truncatula BAC-size sequences published at http://www.medicago.org/genome/assembly_table.php?chr=1. Conclusion: The program ClustDB proves that window alignment is an efficient way to find long sequence sections of homogeneous alignment quality, as expected in case of random errors, and to detect systematic errors resulting from sequence contaminations. Such inserts are systematically overlooked in long alignments controlled only by tuning penalties for mismatches and gaps. ClustDB is freely available for academic use.

  1. Computational analysis of the atomic size effect in bulk metallic glasses and their liquid precursors

    International Nuclear Information System (INIS)

    Kokotin, V.; Hermann, H.

    2008-01-01

    The atomic size effect and its consequences for the ability of multicomponent liquid alloys to form bulk metallic glasses are analyzed in terms of the generalized Bernal model for liquids, following the hypothesis that maximum density in the liquid state improves the glass-forming ability. The maximum density that can be achieved in the liquid state is studied in the 2(N-1)-dimensional parameter space of N-component systems. Computer simulations reveal that the size ratio of the largest to the smallest atoms is most relevant for achieving maximum packing for N = 3-5, whereas the number of components plays a minor role. At small size ratios, the maximum packing density can be achieved by different atomic size distributions, whereas for medium size ratios the maximum density is always correlated with a concave size distribution. The relationship of the results to Miracle's efficient cluster packing model is also discussed.

  2. Relation between the ion size and pore size for an electric double-layer capacitor.

    Science.gov (United States)

    Largeot, Celine; Portet, Cristelle; Chmiola, John; Taberna, Pierre-Louis; Gogotsi, Yury; Simon, Patrice

    2008-03-05

    The research on electrochemical double layer capacitors (EDLC), also known as supercapacitors or ultracapacitors, is quickly expanding because their power delivery performance fills the gap between dielectric capacitors and traditional batteries. However, many fundamental questions, such as the relations between the pore size of carbon electrodes, the ion size of the electrolyte, and the capacitance, have not yet been fully answered. We show that the pore size leading to the maximum double-layer capacitance of a TiC-derived carbon electrode in a solvent-free ethyl-methylimidazolium-bis(trifluoro-methane-sulfonyl)imide (EMI-TFSI) ionic liquid is roughly equal to the ion size (approximately 0.7 nm). The capacitance values of TiC-CDC produced at 500 degrees C are more than 160 F/g and 85 F/cm3 at 60 degrees C, while standard activated carbons with larger pores and a broader pore size distribution present capacitance values lower than 100 F/g and 50 F/cm3 in ionic liquids. A significant drop in capacitance has been observed in pores that were larger or smaller than the ion size by just an angstrom, suggesting that the pore size must be tuned with sub-angstrom accuracy when selecting a carbon/ion couple. This work suggests a general approach to EDLC design leading to the maximum energy density, which has now been proved for both solvated organic salts and solvent-free liquid electrolytes.

  3. GRAIN SIZE CONSTRAINTS ON HL TAU WITH POLARIZATION SIGNATURE

    International Nuclear Information System (INIS)

    Kataoka, Akimasa; Dullemond, Cornelis P; Muto, Takayuki; Momose, Munetake; Tsukagoshi, Takashi

    2016-01-01

    The millimeter-wave polarization of the protoplanetary disk around HL Tau has been interpreted as the emission from elongated dust grains aligned with the magnetic field in the disk. However, the self-scattering of thermal dust emission may also explain the observed millimeter-wave polarization. In this paper, we report a modeling of the millimeter-wave polarization of the HL Tau disk with the self-polarization. Dust grains are assumed to be spherical and to have a power-law size distribution. We change the maximum grain size with a fixed dust composition in a fixed disk model to find the grain size to reproduce the observed signature. We find that the direction of the polarization vectors and the polarization degree can be explained with the self-scattering. Moreover, the polarization degree can be explained only if the maximum grain size is ∼150 μm. The obtained grain size from the polarization is different from that which has been previously expected from the spectral index of the dust opacity coefficient (a millimeter or larger) if the emission is optically thin. We discuss that porous dust aggregates may solve the inconsistency of the maximum grain size between the two constraints

  5. Two-Agent Scheduling to Minimize the Maximum Cost with Position-Dependent Jobs

    Directory of Open Access Journals (Sweden)

    Long Wan

    2015-01-01

    This paper investigates a single-machine two-agent scheduling problem to minimize the maximum cost with position-dependent jobs. There are two agents, each with a set of independent jobs, competing to perform their jobs on a common machine. In our scheduling setting, the actual processing time of a job is characterized by a variable function of the position of the job in the sequence. Each agent wants to fulfil the objective of minimizing the maximum cost of its own jobs. We develop a feasible method to achieve all the Pareto optimal points in polynomial time.

  6. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore applied in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results describe a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.

  7. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described
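
    The Monte Carlo setup above is easy to reproduce in outline. The sketch below simulates 6-category rating data for signal (SN) and noise (N) cases and estimates the AUC per run for the four sample sizes; note that the study's bias arises in the binormal ROC curve fitting step, which is omitted here, so this hedged sketch with a trapezoidal (Mann-Whitney) AUC reproduces the simulation machinery rather than the reported effect. The category cutoffs and the separation parameter are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def empirical_auc(sn, n):
        # Mann-Whitney estimate of P(rating_SN > rating_N); ties count 1/2.
        sn = sn[:, None]
        return np.mean(sn > n) + 0.5 * np.mean(sn == n)

    def simulate(ss, sep=1.0, n_runs=2000, cuts=(-0.5, 0.0, 0.5, 1.0, 1.5)):
        aucs = []
        for _ in range(n_runs):
            sn = np.digitize(rng.normal(sep, 1.0, ss), cuts)  # 6-point scale
            n = np.digitize(rng.normal(0.0, 1.0, ss), cuts)
            aucs.append(empirical_auc(sn, n))
        return np.mean(aucs), np.std(aucs)

    for ss in (15, 25, 50, 100):
        mean_auc, sd_auc = simulate(ss)
        print(f"SS={ss:4d}  mean AUC={mean_auc:.4f}  SD={sd_auc:.4f}")
    print("latent-variable AUC:", norm.cdf(1.0 / np.sqrt(2.0)))
    ```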

  8. The role of consumer satisfaction, consideration set size, variety seeking and convenience orientation in explaining seafood consumption in Vietnam

    OpenAIRE

    Ninh, Thi Kim Anh

    2010-01-01

    The study examines the relationship between convenience food and seafood consumption in Vietnam through a replication and an extension of the studies of Rortveit and Olsen (2007; 2009). The main purpose of this study is to give an understanding of the role of consumers’ satisfaction, consideration set size, variety seeking, and convenience orientation in explaining seafood consumption behavior in Vietnam.

  9. A maximum power point tracking algorithm for buoy-rope-drum wave energy converters

    Science.gov (United States)

    Wang, J. Q.; Zhang, X. C.; Zhou, Y.; Cui, Z. C.; Zhu, L. S.

    2016-08-01

    Maximum power point tracking control is the key link in improving the energy conversion efficiency of wave energy converters (WEC). This paper presents a novel variable step size Perturb and Observe maximum power point tracking algorithm with a power classification standard for control of a buoy-rope-drum WEC. The algorithm and the simulation model of the buoy-rope-drum WEC are presented in detail, together with simulation experiment results. The results show that the algorithm tracks the maximum power point of the WEC quickly and accurately.
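
    A hedged sketch of a variable step size perturb-and-observe loop of the general kind described above. The step bounds, the unit gain from |dP| to the step, and the toy power curve are placeholder assumptions, not the authors' WEC model or their power classification standard.

    ```python
    def perturb_and_observe(read_power, set_point, step_max=0.10, step_min=0.01):
        """Generator-style P&O tracker: one perturbation per control period."""
        x, step, direction = set_point, step_max, 1.0
        p_prev = read_power(x)
        while True:
            x += direction * step
            p = read_power(x)
            dp = p - p_prev
            if dp < 0:
                direction = -direction        # overshot the peak: reverse
            # Variable step: large far from the MPP (big |dP|), small near it.
            # A unit gain is used here; in practice the gain is tuned.
            step = min(step_max, max(step_min, abs(dp)))
            p_prev = p
            yield x, p

    # Toy usage on a power curve with its maximum at x = 2.0:
    tracker = perturb_and_observe(lambda x: 4.0 - (x - 2.0) ** 2, set_point=0.5)
    for _ in range(40):
        x, p = next(tracker)
    print(round(x, 2), round(p, 2))           # settles near the maximum
    ```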

  10. Myocardial infarct sizing by late gadolinium-enhanced MRI: Comparison of manual, full-width at half-maximum, and n-standard deviation methods.

    Science.gov (United States)

    Zhang, Lin; Huttin, Olivier; Marie, Pierre-Yves; Felblinger, Jacques; Beaumont, Marine; de Chillou, Christian; Girerd, Nicolas; Mandry, Damien

    2016-11-01

    To compare three widely used methods for myocardial infarct (MI) sizing on late gadolinium-enhanced (LGE) magnetic resonance (MR) images: manual delineation and two semiautomated techniques (full-width at half-maximum [FWHM] and n-standard deviation [SD]). 3T phase-sensitive inversion-recovery (PSIR) LGE images of 114 patients after an acute MI (2-4 days and 6 months) were analyzed by two independent observers to determine both total and core infarct sizes (TIS/CIS). Manual delineation served as the reference for determination of optimal thresholds for the semiautomated methods after thresholding at multiple values. Reproducibility and accuracy were expressed as overall bias ± 95% limits of agreement. Mean infarct sizes by the manual method were 39.0%/24.4% for the acute MI group (TIS/CIS) and 29.7%/17.3% for the chronic MI group. The optimal thresholds (i.e., providing the closest mean value to the manual method) were FWHM30% and 3SD for the TIS measurement and FWHM45% and 6SD for the CIS measurement (paired t-test; all P > 0.05). The best reproducibility was obtained using FWHM. For TIS measurement in the acute MI group, intra-/interobserver agreements, from Bland-Altman analysis, with FWHM30%, 3SD, and manual methods were -0.02 ± 7.74%/-0.74 ± 5.52%, 0.31 ± 9.78%/2.96 ± 16.62% and -2.12 ± 8.86%/0.18 ± 16.12%, respectively; in the chronic MI group, the corresponding values were 0.23 ± 3.5%/-2.28 ± 15.06%, -0.29 ± 10.46%/3.12 ± 13.06% and 1.68 ± 6.52%/-2.88 ± 9.62%, respectively. A similar trend in reproducibility was obtained for CIS measurement. However, semiautomated methods produced inconsistent results (variabilities of 24-46%) compared to manual delineation. The FWHM technique was the most reproducible method for infarct sizing in both acute and chronic MI. However, both FWHM and n-SD methods showed limited accuracy compared to manual delineation. J. Magn. Reson. Imaging 2016;44:1206-1217. © 2016 International Society for Magnetic Resonance in Medicine.

  11. Impact of marine reserve on maximum sustainable yield in a traditional prey-predator system

    Science.gov (United States)

    Paul, Prosenjit; Kar, T. K.; Ghorai, Abhijit

    2018-01-01

    Multispecies fisheries management requires managers to consider the impact of fishing activities on several species, as fishing affects both targeted and non-targeted species directly or indirectly in several ways. The intended goal of traditional fisheries management is to achieve maximum sustainable yield (MSY) from the targeted species, which on many occasions affects the targeted species as well as the entire ecosystem. Marine reserves are often acclaimed as a marine ecosystem management tool. Few attempts have been made to generalize the ecological effects of marine reserves on MSY policy. We examine here how MSY and population levels in a prey-predator system are affected by low, medium and high reserve sizes under different possible scenarios. Our simulation work shows that for a low reserve area, the value of MSY for prey exploitation is maximized when both prey and predator species have fast movement rates. For a medium reserve size, our analysis revealed that the maximum value of MSY for prey exploitation is obtained when the prey population has a fast movement rate and the predator population has a slow movement rate. For a high reserve area, the maximum value of MSY for prey exploitation is very low compared with that obtained for low and medium reserves. On the other hand, for low and medium reserve areas, MSY for predator exploitation is maximized when both species have fast movement rates.

  12. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of numerically obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which reduces to the procedure known in the literature when the step size is taken to be 1. It is shown that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step size lies between 0 and 2. The step size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
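
    Read concretely, the step-size idea relaxes a fixed-point iteration: if M(theta) is one standard EM-type update for the mixture, the generalized procedure is theta <- theta + omega * (M(theta) - theta), which recovers the classical update at omega = 1 and converges locally for 0 < omega < 2. The sketch below illustrates this for a two-component normal mixture; the update details and the initialization are illustrative assumptions, not the paper's exact procedure.

    ```python
    import numpy as np
    from scipy.stats import norm

    def em_map(theta, data):
        """One standard EM update M(theta) for a two-component normal mixture."""
        w, mu1, s1, mu2, s2 = theta
        p1 = w * norm.pdf(data, mu1, s1)
        p2 = (1.0 - w) * norm.pdf(data, mu2, s2)
        r = p1 / (p1 + p2)
        m1 = np.sum(r * data) / r.sum()
        m2 = np.sum((1 - r) * data) / (1 - r).sum()
        v1 = np.sum(r * (data - m1) ** 2) / r.sum()
        v2 = np.sum((1 - r) * (data - m2) ** 2) / (1 - r).sum()
        return np.array([r.mean(), m1, np.sqrt(v1), m2, np.sqrt(v2)])

    def relaxed_iteration(theta0, data, omega=1.2, n_iter=100):
        # omega = 1 is plain EM; omega too close to 2 can overshoot early on.
        theta = np.asarray(theta0, dtype=float)
        for _ in range(n_iter):
            theta = theta + omega * (em_map(theta, data) - theta)
        return theta

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-1.0, 1.0, 200), rng.normal(2.0, 0.5, 100)])
    print(relaxed_iteration([0.5, -0.5, 1.0, 1.0, 1.0], data))
    ```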

  13. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  14. PTree: pattern-based, stochastic search for maximum parsimony phylogenies.

    Science.gov (United States)

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
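
    For readers unfamiliar with the criterion PTree optimizes, the sketch below computes the parsimony score of a single alignment column on a fixed rooted binary tree with Fitch's algorithm; the tree and states are illustrative, and PTree's pattern-based stochastic search over topologies is considerably more involved than this scoring step.

    ```python
    def fitch_score(tree, states):
        """tree: nested 2-tuples of leaf names; states: dict leaf -> character."""
        changes = 0
        def post(node):
            nonlocal changes
            if isinstance(node, str):             # leaf: its observed state
                return {states[node]}
            left, right = (post(child) for child in node)
            common = left & right
            if common:                            # no substitution needed here
                return common
            changes += 1                          # one substitution required
            return left | right
        post(tree)
        return changes

    # Parsimony score of one column on the tree ((A,B),(C,D)):
    tree = (("A", "B"), ("C", "D"))
    print(fitch_score(tree, {"A": "G", "B": "G", "C": "T", "D": "G"}))  # -> 1
    ```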

  15. STUDY ON MAXIMUM SPECIFIC SLUDGE ACTIVITY OF DIFFERENT ANAEROBIC GRANULAR SLUDGE BY BATCH TESTS

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The maximum specific sludge activity of granular sludge from large-scale UASB, IC and Biobed anaerobic reactors was investigated by batch tests. The limiting factors related to maximum specific sludge activity (diffusion, substrate type, substrate concentration and granule size) were studied. A general principle and procedure for the precise measurement of maximum specific sludge activity are suggested. The potential loading-rate capacity of the IC and Biobed anaerobic reactors was analyzed and compared using the batch test results.

  16. Radiation-induced pollen germination, tube growth, its localized cytochemical constituents, fruit set and fruit size in alkaloid yielding species Solanum torvum L

    International Nuclear Information System (INIS)

    Chauhan, Y.S.; Katiyar, S.R.

    1990-01-01

    Pollen volume, total number of pollen grains per flower, percentage of pollen germination and tube growth were higher in long-styled flowers than in short-styled flowers of S. torvum. In addition, pollination studies were conducted among the four selected crossing sets to investigate optimum fruit set. Fruit set was not seen in either the first or the second set (short female × short male and short female × long male). The maximum fruit set was obtained in the fourth set (long female × long male). Pollen grains of long-styled flowers irradiated with 1-800 krad were germinated in the basal medium. The percentage of pollen germination and the tube growth were stimulated over the control by exposures of 1 and 50 krad, but increasing doses inhibited both processes. Utilization of insoluble polysaccharides, and the synthesis of RNA and protein, were enhanced over the control by 50 krad, while the higher (800 krad) exposures inhibited all of the above cytochemical constituents. Pollen treated with various doses was used to pollinate the stigma surface of long-styled flowers. Fruit set, fruit volume, fresh and dry weight of fruits, and the number of seeds set per fruit were enhanced over the control by 1 and 50 krad, while higher doses were inhibitory. Interestingly, no fruit set occurred at radiation doses of 400 krad and above. (author)

  17. Load calculations of radiant cooling systems for sizing the plant

    DEFF Research Database (Denmark)

    Bourdakis, Eleftherios; Kazanci, Ongun Berk; Olesen, Bjarne W.

    2015-01-01

    The aim of this study was, by using building simulation software, to show that a radiant cooling system should not be sized based on the maximum cooling load but at a lower value. For that reason, six radiant cooling models were simulated with two control principles, using 100%, 70% and 50% of the maximum cooling load. It was concluded that all tested systems were able to provide an acceptable thermal environment even when only 50% of the maximum cooling load was used. Of all the simulated systems, the one that performed best under both control principles was the ESCS ceiling system. Finally, it was shown that ventilation systems should be sized based on the maximum cooling load.

  18. Maximum Power Point Tracking in Variable Speed Wind Turbine Based on Permanent Magnet Synchronous Generator Using Maximum Torque Sliding Mode Control Strategy

    Institute of Scientific and Technical Information of China (English)

    Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh

    2017-01-01

    The present study was carried out in order to track the maximum power point in a variable speed turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, the rotor speed is first set at an optimal point for different wind speeds. As a result, the tip speed ratio reaches an optimal point, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. Then, the maximum mechanical torque is tracked using electromechanical torque. In this technique, the tracking error integral of the maximum mechanical torque, the error, and the derivative of the error are used as state variables. During changes in wind speed, the sliding mode control is designed to absorb the maximum energy from the wind and minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed from a second-order integral operation of the original sliding mode control input signal. The result of the second-order integral in this model includes control signal integrity, full chattering attenuation, and prevention of large fluctuations in the generator power output. The simulation results, obtained using MATLAB/m-file software, show the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).

  19. Estimation of Maximum Allowable PV Connection to LV Residential Power Networks

    DEFF Research Database (Denmark)

    Demirok, Erhan; Sera, Dezso; Teodorescu, Remus

    2011-01-01

    The maximum photovoltaic (PV) hosting capacity of low voltage (LV) power networks is mainly restricted by either the thermal limits of network components or the grid voltage quality resulting from high penetration of distributed PV systems. This maximum hosting capacity may be lower than the available solar potential of the geographic area due to power network limitations, even when all rooftops are fully occupied with PV modules. Therefore, it becomes more of an issue to know what exactly limits higher PV penetration levels and which solutions should be engaged efficiently, such as oversizing distribution...

  20. Hierarchical complexity and the size limits of life.

    Science.gov (United States)

    Heim, Noel A; Payne, Jonathan L; Finnegan, Seth; Knope, Matthew L; Kowalewski, Michał; Lyons, S Kathleen; McShea, Daniel W; Novack-Gottshall, Philip M; Smith, Felisa A; Wang, Steve C

    2017-06-28

    Over the past 3.8 billion years, the maximum size of life has increased by approximately 18 orders of magnitude. Much of this increase is associated with two major evolutionary innovations: the evolution of eukaryotes from prokaryotic cells approximately 1.9 billion years ago (Ga), and multicellular life diversifying from unicellular ancestors approximately 0.6 Ga. However, the quantitative relationship between organismal size and structural complexity remains poorly documented. We assessed this relationship using a comprehensive dataset that includes organismal size and level of biological complexity for 11 172 extant genera. We find that the distributions of sizes within complexity levels are unimodal, whereas the aggregate distribution is multimodal. Moreover, both the mean size and the range of sizes occupied increase with each additional level of complexity. Increases in size range are non-symmetric: the maximum organismal size increases more than the minimum. The majority of the observed increase in organismal size over the history of life on the Earth is accounted for by two discrete jumps in complexity rather than by evolutionary trends within levels of complexity. Our results provide quantitative support for an evolutionary expansion away from a minimal size constraint and suggest a fundamental rescaling of the constraints on minimal and maximal size as biological complexity increases. © 2017 The Author(s).

  1. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    Science.gov (United States)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.

  2. Maximum power point tracker for photovoltaic power plants

    Science.gov (United States)

    Arcidiacono, V.; Corsi, S.; Lambri, L.

    The paper describes two different closed-loop control criteria for the maximum power point tracking of the voltage-current characteristic of a photovoltaic generator. The two criteria are discussed and compared, inter alia, with regard to the setting-up problems that they pose. Although a detailed analysis is not embarked upon, the paper also provides some quantitative information on the energy advantages obtained by using electronic maximum power point tracking systems, as compared with the situation in which the point of operation of the photovoltaic generator is not controlled at all. Lastly, the paper presents two high-efficiency MPPT converters for experimental photovoltaic plants of the stand-alone and the grid-interconnected type.

  3. Novel maximum-margin training algorithms for supervised neural networks.

    Science.gov (United States)

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function, through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, however, overcoming the complexity involved in solving a constrained optimization problem, usually in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by the Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stopping criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by

  4. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  5. Quantitative Maximum Shear-Wave Stiffness of Breast Masses as a Predictor of Histopathologic Severity.

    Science.gov (United States)

    Berg, Wendie A; Mendelson, Ellen B; Cosgrove, David O; Doré, Caroline J; Gay, Joel; Henry, Jean-Pierre; Cohen-Bacrie, Claude

    2015-08-01

    The objective of our study was to compare quantitative maximum breast mass stiffness on shear-wave elastography (SWE) with histopathologic outcome. From September 2008 through September 2010, at 16 centers in the United States and Europe, 1647 women with a sonographically visible breast mass consented to undergo quantitative SWE in this prospective protocol; 1562 masses in 1562 women had an acceptable reference standard. The quantitative maximum stiffness (termed "Emax") on three acquisitions was recorded for each mass with the range set from 0 (very soft) to 180 kPa (very stiff). The median Emax and interquartile ranges (IQRs) were determined as a function of histopathologic diagnosis and were compared using the Mann-Whitney U test. We considered the impact of mass size on maximum stiffness by performing the same comparisons for masses 9 mm or smaller and those larger than 9 mm in diameter. The median patient age was 50 years (mean, 51.8 years; SD, 14.5 years; range, 21-94 years), and the median lesion diameter was 12 mm (mean, 14 mm; SD, 7.9 mm; range, 1-53 mm). The median Emax of the 1562 masses (32.1% malignant) was 71 kPa (mean, 90 kPa; SD, 65 kPa; IQR, 31-170 kPa). Of 502 malignancies, 23 (4.6%) ductal carcinoma in situ (DCIS) masses had a median Emax of 126 kPa (IQR, 71-180 kPa) and were less stiff than 468 invasive carcinomas (median Emax, 180 kPa [IQR, 138-180 kPa]; p = 0.002). Benign lesions were much softer than malignancies (median Emax, 43 kPa [IQR, 24-83 kPa] vs 180 kPa [IQR, 129-180 kPa]; p < 0.001), both for masses 9 mm or smaller and for those larger than 9 mm. Despite overlap in Emax values, maximum stiffness measured by SWE is a highly effective predictor of the histopathologic severity of sonographically depicted breast masses.

  6. Noise and physical limits to maximum resolution of PET images

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU 'Gregorio Maranon', E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es

    2007-10-01

    In this work we show that there is a limit for the maximum resolution achievable with a high resolution PET scanner, as well as for the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation and thus they cannot be overcome with any particular reconstruction method. These effects prevent the spatial high frequency components of the imaged structures to be recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, like the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data as a limitation factor to yield high-resolution images in tomographs with small crystal sizes is outlined. These results have implications regarding how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners.

  7. Noise and physical limits to maximum resolution of PET images

    International Nuclear Information System (INIS)

    Herraiz, J.L.; Espana, S.; Vicente, E.; Vaquero, J.J.; Desco, M.; Udias, J.M.

    2007-01-01

    In this work we show that there is a limit for the maximum resolution achievable with a high resolution PET scanner, as well as for the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation and thus they cannot be overcome with any particular reconstruction method. These effects prevent the spatial high frequency components of the imaged structures to be recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, like the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data as a limitation factor to yield high-resolution images in tomographs with small crystal sizes is outlined. These results have implications regarding how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners

  8. Optimizing the calculation of point source count-centroid in pixel size measurement

    International Nuclear Information System (INIS)

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

    Purpose: Pixel size is an important parameter of gamma cameras and SPECT. A number of methods are used for its accurate measurement. In the original count-centroid method, where the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image, background counts are inevitable. Thus the measured count-centroid (X_m) is an approximation of the true count-centroid (X_p) of the PS, i.e. X_m = X_p + (X_b - X_p)/(1 + R_p/R_b), where R_p is the net counting rate of the PS, X_b the background count-centroid and R_b the background counting rate. To get an accurate measurement, R_p must be very large, which is impractical, resulting in variation of the measured pixel size. An R_p-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempts to eliminate the effect of the term (X_b - X_p)/(1 + R_p/R_b) by bringing X_b closer to X_p and by reducing R_b. In the acquired PS image, a circular ROI was generated to enclose the PS, the pixel with the maximum count being the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, a fraction K = 1 - (0.5)^(D/R) of the total PS counts was in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6*R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent X_p. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic setting (128*128 matrix, 387 mm UFOV, ZOOM=1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean = 3.01 ± 0.00) as R_p increased
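
    A hedged sketch of the ROI-based computation described above: locate the hottest pixel, take a circular ROI of diameter D = 6*R around it, and compute the count-weighted centroid inside the ROI. The array layout and the assumption that R (the FWHM in pixels) is known in advance are illustrative.

    ```python
    import numpy as np

    def roi_centroid(image, fwhm_pixels):
        """Count-centroid of a point source inside a circular ROI (D = 6*R)."""
        iy, ix = np.unravel_index(np.argmax(image), image.shape)  # ROI center
        radius = 3.0 * fwhm_pixels
        y, x = np.indices(image.shape)
        mask = (y - iy) ** 2 + (x - ix) ** 2 <= radius ** 2
        counts = np.where(mask, image, 0).astype(float)   # zero outside ROI
        total = counts.sum()
        return (x * counts).sum() / total, (y * counts).sum() / total

    # Pixel size then follows from two point sources a known distance d apart:
    # pixel_size = d / distance_between_their_centroids.
    ```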

  9. Body Size Distribution of the Dinosaurs

    OpenAIRE

    O'Gorman, Eoin J.; Hone, David W. E.

    2012-01-01

    The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutiona...

  10. Improving Bayesian credibility intervals for classifier error rates using maximum entropy empirical priors.

    Science.gov (United States)

    Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders

    2010-06-01

    Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently, many promising approaches for determining an upper bound for the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real-world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples. In other words, no information at all is provided by the uniform prior density distribution employed, which reflects a complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to study a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests using non-overlapping sets of examples. Experimental results show that ME-based priors improve the CIs when applied to four quite different simulated data sets and two real-world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
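
    The baseline the paper improves on is easy to state: with k errors in n held-out test examples and a Beta(a, b) prior on the error rate, the posterior is Beta(a + k, b + n - k), and the uniform prior is a = b = 1. The sketch below computes such a CI; the informative (a, b) pair stands in for an ME-derived empirical prior, whose actual form in the paper need not be a Beta density.

    ```python
    from scipy.stats import beta

    def credibility_interval(k, n, a=1.0, b=1.0, level=0.95):
        """Equal-tailed Bayesian CI for an error rate from k errors in n tests."""
        posterior = beta(a + k, b + n - k)
        tail = (1.0 - level) / 2.0
        return posterior.ppf(tail), posterior.ppf(1.0 - tail)

    # With only 50 test examples the uniform-prior CI is wide:
    print(credibility_interval(5, 50))                  # uniform prior
    print(credibility_interval(5, 50, a=2.0, b=18.0))   # assumed empirical prior
    ```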

  11. n-Order and maximum fuzzy similarity entropy for discrimination of signals of different complexity: Application to fetal heart rate signals.

    Science.gov (United States)

    Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc

    2015-09-01

    This paper presents two new concepts for discrimination of signals of different complexity. The first focused initially on solving the problem of setting entropy descriptors by varying the pattern size instead of the tolerance. This led to the search for the optimal pattern size that maximized the similarity entropy. The second paradigm was based on the n-order similarity entropy that encompasses the 1-order similarity entropy. To improve the statistical stability, n-order fuzzy similarity entropy was proposed. Fractional Brownian motion was simulated to validate the different methods proposed, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases, it was found that it was possible to discriminate time series of different complexity such as fractional Brownian motion and fetal heart rate signals. The best levels of performance in terms of sensitivity (90%) and specificity (90%) were obtained with the n-order fuzzy similarity entropy. However, it was shown that the optimal pattern size and the maximum similarity measurement were related to intrinsic features of the time series. Copyright © 2015 Elsevier Ltd. All rights reserved.
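
    A hedged sketch of a fuzzy similarity (sample-entropy-like) measure: pattern similarity is graded by an exponential fuzzy membership function rather than a hard tolerance threshold, and the entropy is the negative log ratio of average similarities at pattern sizes m+1 and m. Parameter names follow common sample-entropy usage and are not the authors' exact formulation.

    ```python
    import numpy as np

    def fuzzy_similarity_entropy(x, m=2, r=0.2):
        x = np.asarray(x, dtype=float)
        width = r * x.std()                       # tolerance scaled to the data

        def phi(m):
            patt = np.array([x[i:i + m] for i in range(len(x) - m)])
            # Chebyshev distance between every pair of patterns.
            d = np.abs(patt[:, None, :] - patt[None, :, :]).max(axis=2)
            sim = np.exp(-(d ** 2) / width)       # fuzzy membership in (0, 1]
            np.fill_diagonal(sim, 0.0)            # exclude self-matches
            return sim.sum() / (len(patt) * (len(patt) - 1))

        return -np.log(phi(m + 1) / phi(m))

    rng = np.random.default_rng(1)
    print(fuzzy_similarity_entropy(rng.standard_normal(300)))
    ```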

  12. 49 CFR 229.73 - Wheel sets.

    Science.gov (United States)

    2010-10-01

    49 CFR § 229.73 (Railroad Locomotive Safety Standards: Safety Requirements, Suspension System), Wheel sets. (a) ... when applied or turned. (b) The maximum variation in the diameter between any two wheel sets in a three...

  13. 49 CFR Appendix B to Part 386 - Penalty Schedule; Violations and Maximum Civil Penalties

    Science.gov (United States)

    2010-10-01

    49 CFR Part 386, Appendix B: Penalty Schedule; Violations and Maximum Civil Penalties. The Debt Collection Improvement Act of 1996 [Public Law 104-134, title III...] ... civil penalties set out in paragraphs (e)(1) through (4) of this appendix results in death, serious...

  14. Maximum margin classifier working in a set of strings.

    Science.gov (United States)

    Koyano, Hitoshi; Hayashida, Morihiro; Akutsu, Tatsuya

    2016-03-01

    Numbers and numerical vectors account for a large portion of data. However, recently, the amount of string data generated has increased dramatically. Consequently, classifying string data is a common problem in many fields. The most widely used approach to this problem is to convert strings into numerical vectors using string kernels and subsequently apply a support vector machine that works in a numerical vector space. However, this non-one-to-one conversion involves a loss of information and makes it impossible to evaluate, using probability theory, the generalization error of a learning machine, considering that the given data to train and test the machine are strings generated according to probability laws. In this study, we approach this classification problem by constructing a classifier that works in a set of strings. To evaluate the generalization error of such a classifier theoretically, probability theory for strings is required. Therefore, we first extend a limit theorem for a consensus sequence of strings demonstrated by one of the authors and co-workers in a previous study. Using the obtained result, we then demonstrate that our learning machine classifies strings in an asymptotically optimal manner. Furthermore, we demonstrate the usefulness of our machine in practical data analysis by applying it to predicting protein-protein interactions using amino acid sequences and classifying RNAs by the secondary structure using nucleotide sequences.

  15. Particle size distributions of lead measured in battery manufacturing and secondary smelter facilities and implications in setting workplace lead exposure limits.

    Science.gov (United States)

    Petito Boyce, Catherine; Sax, Sonja N; Cohen, Joel M

    2017-08-01

    Inhalation plays an important role in exposures to lead in airborne particulate matter in occupational settings, and particle size determines where and how much of airborne lead is deposited in the respiratory tract and how much is subsequently absorbed into the body. Although some occupational airborne lead particle size data have been published, limited information is available reflecting current workplace conditions in the U.S. To address this data gap, the Battery Council International (BCI) conducted workplace monitoring studies at nine lead acid battery manufacturing facilities (BMFs) and five secondary smelter facilities (SSFs) across the U.S. This article presents the results of the BCI studies focusing on the particle size distributions calculated from Personal Marple Impactor sampling data and particle deposition estimates in each of the three major respiratory tract regions derived using the Multiple-Path Particle Dosimetry model. The BCI data showed the presence of predominantly larger-sized particles in the work environments evaluated, with average mass median aerodynamic diameters (MMADs) ranging from 21-32 µm for the three BMF job categories and from 15-25 µm for the five SSF job categories tested. The BCI data also indicated that the percentage of lead mass measured at the sampled facilities in the submicron range (i.e., lead) was generally small. The estimated average percentages of lead mass in the submicron range for the tested job categories ranged from 0.8-3.3% at the BMFs and from 0.44-6.1% at the SSFs. Variability was observed in the particle size distributions across job categories and facilities, and sensitivity analyses were conducted to explore this variability. The BCI results were compared with results reported in the scientific literature. Screening-level analyses were also conducted to explore the overall degree of lead absorption potentially associated with the observed particle size distributions and to identify key issues

  16. An entropy approach to size and variance heterogeneity

    NARCIS (Netherlands)

    Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.

    2012-01-01

    In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity

  17. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    Science.gov (United States)

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities that would experimentally correspond to 90% of the neuron population being active within time windows of a few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of a macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundred or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
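
    The Glauber dynamics in question is simple to write down. The sketch below runs it for a homogeneous pairwise model with binary units, biases h, and weak uniform excitatory couplings J; with these illustrative values the population-averaged activity has two metastable levels, and long runs can stay near one of them for many sweeps, which is the bistability and non-ergodicity discussed above. All parameter values are assumptions, not fitted to any recording.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N = 100
    h = -3.0 * np.ones(N)                  # biases favouring silence
    J = (6.0 / N) * np.ones((N, N))        # uniform excitatory couplings
    np.fill_diagonal(J, 0.0)

    n = rng.integers(0, 2, N).astype(float)
    activity = []
    for _ in range(20000):                 # single-site Glauber updates
        i = rng.integers(N)
        field = h[i] + J[i] @ n            # local field on unit i
        n[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-field)))
        activity.append(n.mean())
    # A histogram of 'activity' over long runs exhibits the two modes
    # (low and high population activity) described in the abstract.
    ```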

  18. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  19. Simulation model of ANN based maximum power point tracking controller for solar PV system

    Energy Technology Data Exchange (ETDEWEB)

    Rai, Anil K.; Singh, Bhupal [Department of Electrical and Electronics Engineering, Ajay Kumar Garg Engineering College, Ghaziabad 201009 (India); Kaushika, N.D.; Agarwal, Niti [School of Research and Development, Bharati Vidyapeeth College of Engineering, A-4 Paschim Vihar, New Delhi 110063 (India)

    2011-02-15

    In this paper the simulation model of an artificial neural network (ANN) based maximum power point tracking controller has been developed. The controller consists of an ANN tracker and an optimal control unit. The ANN tracker estimates the voltages and currents corresponding to the maximum power delivered by the solar PV (photovoltaic) array for variable cell temperature and solar radiation. The cell temperature is considered as a function of ambient air temperature, wind speed and solar radiation. The tracker is trained on a set of 124 patterns using the back propagation algorithm. The mean square error of tracker output and target values is set to be of the order of 10^-5, and successful convergence of the learning process takes 1281 epochs. The accuracy of the ANN tracker has been validated by employing different test data sets. The control unit uses the estimates of the ANN tracker to adjust the duty cycle of the chopper to the optimum value needed for maximum power transfer to the specified load. (author)

  20. New shower maximum trigger for electrons and photons at CDF

    International Nuclear Information System (INIS)

    Amidei, D.; Burkett, K.; Gerdes, D.; Miao, C.; Wolinski, D.

    1994-01-01

    For the 1994 Tevatron collider run, CDF has upgraded the electron and photon trigger hardware to make use of shower position and size information from the central shower maximum detector. For electrons, the upgrade has resulted in a 50% reduction in backgrounds while retaining approximately 90% of the signal. The new trigger also eliminates the background to photon triggers from single-phototube spikes

  1. New shower maximum trigger for electrons and photons at CDF

    International Nuclear Information System (INIS)

    Gerdes, D.

    1994-08-01

    For the 1994 Tevatron collider run, CDF has upgraded the electron and photon trigger hardware to make use of shower position and size information from the central shower maximum detector. For electrons, the upgrade has resulted in a 50% reduction in backgrounds while retaining approximately 90% of the signal. The new trigger also eliminates the background to photon triggers from single-phototube discharge

  2. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    Science.gov (United States)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human welfare in financial, environmental and security terms. The data of annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that MLE provides unstable results, especially for small sample sizes. In this study, we used different Bayesian Markov Chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. The Bayesian MCMC method is a statistical inference approach that estimates the parameters using the posterior distribution based on Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by the Monte Carlo method. This approach also accounts for more uncertainty in parameter estimation, which then yields a better prediction of maximum river flow in Sabah.
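
    A hedged sketch of the approach described above: random-walk Metropolis-Hastings sampling of the GEV parameters (mu, log sigma, xi) for annual-maximum data under flat priors. The proposal scales, priors, and synthetic data are illustrative, not the paper's settings; note that scipy's shape parameter c equals -xi.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    def log_posterior(theta, data):
        mu, log_sigma, xi = theta
        # Flat priors on (mu, log sigma, xi); GEV log-likelihood from scipy.
        return genextreme.logpdf(data, c=-xi, loc=mu,
                                 scale=np.exp(log_sigma)).sum()

    def metropolis_hastings(data, n_samples=20000, step=(0.5, 0.1, 0.05)):
        rng = np.random.default_rng(3)
        theta = np.array([data.mean(), np.log(data.std()), 0.1])
        lp = log_posterior(theta, data)
        chain = []
        for _ in range(n_samples):
            proposal = theta + rng.normal(0.0, step)
            lp_prop = log_posterior(proposal, data)
            if np.log(rng.random()) < lp_prop - lp:    # accept/reject
                theta, lp = proposal, lp_prop
            chain.append(theta.copy())
        return np.array(chain)

    # Synthetic annual maxima, then posterior means after burn-in:
    data = genextreme.rvs(c=-0.1, loc=50.0, scale=10.0, size=40, random_state=0)
    print(metropolis_hastings(data)[5000:].mean(axis=0))
    ```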

  3. Design of pore size of macroporous ceramic substrates

    International Nuclear Information System (INIS)

    Szewald, O.; Kotsis, I.

    2000-01-01

    A method has been developed for the design of macroporous ceramic substrates. Based on geometrical and regression models, a detailed technology was worked out for producing these 100% open-porous filters, which were made using quasi-homodisperse fractions of corundum with diameters of several tens to hundreds of microns and a glassy binding material. Axial pressing was used as the forming process. The technology yields pore networks whose size distribution is defined by a curve with a single maximum. Based on geometrical considerations and measurements, it was shown that these maxima occur at characteristic pore sizes that depend only on the characteristic size of the original grain fractions and on the axial forming pressure. Copyright (2000) AD-TECH - International Foundation for the Advancement of Technology Ltd

  4. Experimental river delta size set by multiple floods and backwater hydrodynamics.

    Science.gov (United States)

    Ganti, Vamsi; Chadwick, Austin J; Hassenruck-Gudipati, Hima J; Fuller, Brian M; Lamb, Michael P

    2016-05-01

    River deltas worldwide are currently under threat of drowning and destruction by sea-level rise, subsidence, and oceanic storms, highlighting the need to quantify their growth processes. Deltas are built through construction of sediment lobes, and emerging theories suggest that the size of delta lobes scales with backwater hydrodynamics, but these ideas are difficult to test on natural deltas that evolve slowly. We show results of the first laboratory delta built through successive deposition of lobes that maintain a constant size. We show that the characteristic size of delta lobes emerges because of a preferential avulsion node, the location where the river course periodically and abruptly shifts, which remains fixed spatially relative to the prograding shoreline. The preferential avulsion node in our experiments is a consequence of multiple river floods and Froude-subcritical flows that produce persistent nonuniform flows and a peak in net channel deposition within the backwater zone of the coastal river. In contrast, experimental deltas without multiple floods produce flows with uniform velocities and delta lobes that lack a characteristic size. Results have broad applications to sustainable management of deltas and for decoding their stratigraphic record on Earth and Mars.

  5. An extension theory-based maximum power tracker using a particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Chao, Kuei-Hsiang

    2014-01-01

    Highlights: • We propose an adaptive maximum power point tracking (MPPT) approach for PV systems. • Transient and steady state performances in the tracking process are improved. • The proposed MPPT can automatically tune the tracking step size along a P–V curve. • A PSO algorithm is used to determine the weighting values of extension theory. - Abstract: The aim of this work is to present an adaptive maximum power point tracking (MPPT) approach for photovoltaic (PV) power generation systems. Integrating extension theory with the conventional perturb and observe method, a maximum power point (MPP) tracker is made able to automatically tune the tracking step size by way of category recognition along a P–V characteristic curve. Accordingly, the transient and steady state performances in the tracking process are improved. Furthermore, an optimization approach based on a particle swarm optimization (PSO) algorithm is proposed to reduce the complexity of determining the weighting values. At the end of this work, the simulated improvement in tracking performance is experimentally validated by an MPP tracker with a programmable system-on-chip (PSoC) based controller
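
    The optimization step can be sketched with a plain particle swarm searching for weighting values that minimize a tracking-error objective. The objective below is a stand-in; the paper's objective involves the extension-theory MPPT performance, and all swarm constants are generic defaults rather than the authors' settings.

    ```python
    import numpy as np

    def pso(objective, dim, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
        rng = np.random.default_rng(4)
        x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # candidate weights
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_val = np.array([objective(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            vals = np.array([objective(p) for p in x])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = x[better], vals[better]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Toy usage: recover the weight vector closest to a known target.
    target = np.array([0.3, -0.2, 0.9])
    print(pso(lambda p: np.sum((p - target) ** 2), dim=3)[0])
    ```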

  6. Gravitational Waves and the Maximum Spin Frequency of Neutron Stars

    NARCIS (Netherlands)

    Patruno, A.; Haskell, B.; D'Angelo, C.

    2012-01-01

    In this paper, we re-examine the idea that gravitational waves are required as a braking mechanism to explain the observed maximum spin frequency of neutron stars. We show that for millisecond X-ray pulsars, the existence of spin equilibrium as set by the disk/magnetosphere interaction is sufficient

  7. The scaling of maximum and basal metabolic rates of mammals and birds

    Science.gov (United States)

    Barbosa, Lauro A.; Garcia, Guilherme J. M.; da Silva, Jafferson K. L.

    2006-01-01

    Allometric scaling is one of the most pervasive laws in biology. Its origin, however, is still a matter of dispute. Recent studies have established that maximum metabolic rate scales with an exponent larger than that found for basal metabolism. This unpredicted result sets a challenge that can decide which of the concurrent hypotheses is the correct theory. Here, we show that both scaling laws can be deduced from a single network model. Besides the 3/4-law for basal metabolism, the model predicts power-law scalings in the body mass M for maximum metabolic rate, maximum heart rate and muscular capillary density, in agreement with data.

  8. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  9. A Stochastic Maximum Principle for General Mean-Field Systems

    International Nuclear Information System (INIS)

    Buckdahn, Rainer; Li, Juan; Ma, Jin

    2016-01-01

    In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and that the coefficients are only continuous in the control variable, without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second order variational equations and the corresponding second order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  10. A Stochastic Maximum Principle for General Mean-Field Systems

    Energy Technology Data Exchange (ETDEWEB)

    Buckdahn, Rainer, E-mail: Rainer.Buckdahn@univ-brest.fr [Université de Bretagne-Occidentale, Département de Mathématiques (France); Li, Juan, E-mail: juanli@sdu.edu.cn [Shandong University, Weihai, School of Mathematics and Statistics (China); Ma, Jin, E-mail: jinma@usc.edu [University of Southern California, Department of Mathematics (United States)

    2016-12-15

    In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and that the coefficients are only continuous in the control variable, without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second order variational equations and the corresponding second order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  11. 38 CFR 18.434 - Education setting.

    Science.gov (United States)

    2010-07-01

    38 CFR § 18.434 (Adult Education), Education setting. (a) Academic setting. A recipient shall educate, or shall... with persons who are not handicapped to the maximum extent appropriate to the needs of the handicapped person. A recipient shall place a handicapped person in the regular educational environment operated by the recipient unless...

  12. Every plane graph of maximum degree 8 has an edge-face 9-colouring.

    NARCIS (Netherlands)

    R.J. Kang (Ross); J.-S. Sereni; M. Stehlík

    2011-01-01

    An edge-face coloring of a plane graph with edge set $E$ and face set $F$ is a coloring of the elements of $E \cup F$ such that adjacent or incident elements receive different colors. Borodin proved that every plane graph of maximum degree $\Delta \ge 10$ can be edge-face colored with $\Delta + 1$ colors.

  13. The Local Maximum Clustering Method and Its Application in Microarray Gene Expression Data Analysis

    Directory of Open Access Journals (Sweden)

    Chen Yidong

    2004-01-01

    Full Text Available An unsupervised data clustering method, called the local maximum clustering (LMC) method, is proposed for identifying clusters in experiment data sets based on research interest. A magnitude property is defined according to research purposes, and data sets are clustered around each local maximum of the magnitude property. By properly defining a magnitude property, this method can overcome many difficulties in microarray data clustering such as reduced projection in similarities, noise, and arbitrary gene distribution. To critically evaluate the performance of this clustering method in comparison with other methods, we designed three model data sets with known cluster distributions and applied the LMC method as well as the hierarchical clustering method, the k-means clustering method, and the self-organized map method to these model data sets. The results show that the LMC method produces the most accurate clustering results. As an example of application, we applied the method to cluster the leukemia samples reported in the microarray study of Golub et al. (1999).
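
    The local-maximum idea admits a compact sketch: link each point to the highest-magnitude neighbour within a radius, so chains of links climb uphill and terminate at local maxima, which become the cluster centers. The magnitude property and radius are application choices, and this is a simplified reading, not the authors' exact procedure.

    ```python
    import numpy as np

    def local_maximum_clustering(points, magnitude, radius):
        n = len(points)
        parent = np.arange(n)
        for i in range(n):
            d = np.linalg.norm(points - points[i], axis=1)
            nbrs = np.where(d <= radius)[0]
            best = nbrs[np.argmax(magnitude[nbrs])]
            if magnitude[best] > magnitude[i]:
                parent[i] = best              # climb toward the local maximum
        labels = parent.copy()                # follow chains to their peaks
        while not np.array_equal(labels, labels[labels]):
            labels = labels[labels]
        return labels                         # cluster id = index of its peak

    # Toy usage: two bumps in the plane yield two dominant clusters.
    rng = np.random.default_rng(5)
    pts = rng.random((200, 2))
    mag = (np.exp(-20 * ((pts - 0.3) ** 2).sum(1))
           + np.exp(-20 * ((pts - 0.7) ** 2).sum(1)))
    print(np.unique(local_maximum_clustering(pts, mag, radius=0.15)))
    ```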

  14. Optimizing the calculation of point source count-centroid in pixel size measurement

    International Nuclear Information System (INIS)

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

    Pixel size is an important parameter of gamma cameras and SPECT, and a number of methods are used for its accurate measurement. In the original count-centroid method, the image of a point source (PS) is acquired and its count-centroid is calculated to represent the PS position in the image; background counts are inevitable, so the measured count-centroid X_m is only an approximation of the true count-centroid X_p of the PS: X_m = X_p + (X_b - X_p)/(1 + R_p/R_b), where R_p is the net counting rate of the PS, X_b the background count-centroid and R_b the background counting rate. For an accurate measurement, R_p must be very large, which is impractical and results in variation of the measured pixel size; an R_p-independent calculation of the PS count-centroid is therefore desired. Methods: The proposed method attempts to eliminate the effect of the term (X_b - X_p)/(1 + R_p/R_b) by bringing X_b closer to X_p and by reducing R_b. In the acquired PS image, a circular ROI is generated to enclose the PS, with the pixel having the maximum count as the center of the ROI. To choose the diameter D of the ROI, a Gaussian count distribution is assumed for the PS; accordingly, a fraction K = 1 - (0.5)^(D/R) of the total PS counts lies in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI is then calculated to represent X_p. The proposed method was tested by measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic settings (128 x 128 matrix, 387 mm UFOV, ZOOM=1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. Twelve PSs were prepared and their images acquired and stored; the net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean
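
    The ROI recipe above translates directly into a few lines of array code: locate the maximum-count pixel, keep counts within a circle of diameter 6*FWHM, and take the count-weighted centroid. The sketch below is a minimal illustration under those assumptions (known FWHM, single point source); the names and the synthetic image are hypothetical.

    import numpy as np

    def roi_centroid(image, fwhm):
        """Count-centroid of a point source inside a circular ROI.

        Centre the ROI on the pixel with the maximum count and set its
        diameter to 6*FWHM, so ~98.4% of the PS counts
        (K = 1 - 0.5**(D/R)) fall inside while distant background is
        excluded.
        """
        cy, cx = np.unravel_index(np.argmax(image), image.shape)
        yy, xx = np.indices(image.shape)
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= (3.0 * fwhm) ** 2
        counts = np.where(mask, image, 0.0)
        total = counts.sum()
        return (counts * xx).sum() / total, (counts * yy).sum() / total

    # synthetic PS image: Gaussian blob plus uniform background
    yy, xx = np.indices((64, 64))
    img = 200 * np.exp(-((xx - 40) ** 2 + (yy - 25) ** 2) / (2 * 3.0 ** 2)) + 2.0
    print(roi_centroid(img, fwhm=3.0 * 2.355))  # close to (40, 25)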

  15. From the Cover: Environmental and biotic controls on the evolutionary history of insect body size

    Science.gov (United States)

    Clapham, Matthew E.; Karr, Jered A.

    2012-07-01

    Giant insects, with wingspans as large as 70 cm, ruled the Carboniferous and Permian skies. Gigantism has been linked to hyperoxic conditions because oxygen concentration is a key physiological control on body size, particularly in groups like flying insects that have high metabolic oxygen demands. Here we show, using a dataset of more than 10,500 fossil insect wing lengths, that size tracked atmospheric oxygen concentrations only for the first 150 Myr of insect evolution. The data are best explained by a model relating maximum size to atmospheric oxygen concentration (pO2) until the end of the Jurassic, followed by constant maximum sizes, independent of oxygen fluctuations, during the Cretaceous and, at a smaller size, the Cenozoic. Maximum insect size decreased even as atmospheric pO2 rose in the Early Cretaceous, following the evolution and radiation of early birds, particularly as birds acquired adaptations that allowed more agile flight. A further decrease in maximum size during the Cenozoic may relate to the evolution of bats, the Cretaceous mass extinction, or further specialization of flying birds. The decoupling of insect size and atmospheric pO2 coincident with the radiation of birds suggests that biotic interactions, such as predation and competition, superseded oxygen as the most important constraint on the maximum body size of the largest insects.

  16. MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker

    Energy Technology Data Exchange (ETDEWEB)

    Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw

    2009-06-09

    MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
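
    At its core, maximum-likelihood map-making solves the normal equations (A^T N^-1 A) m = A^T N^-1 d for the map m, given a pointing matrix A, noise covariance N and time-ordered data d. The toy sketch below solves them with an unpreconditioned conjugate gradient and a diagonal (white) noise covariance; MADmap itself adds a preconditioner and FFT-based Toeplitz operators for correlated noise, so this illustrates the equation, not MADmap's code.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import LinearOperator, cg

    nsamp, npix = 10_000, 50
    rng = np.random.default_rng(0)
    hits = rng.integers(0, npix, nsamp)             # pixel seen by each sample
    A = csr_matrix((np.ones(nsamp), (np.arange(nsamp), hits)),
                   shape=(nsamp, npix))             # sparse pointing matrix
    true_map = rng.normal(size=npix)
    noise_var = np.full(nsamp, 0.1)                 # white noise (diagonal N)
    d = A @ true_map + rng.normal(scale=noise_var ** 0.5)  # time-ordered data

    Ninv = 1.0 / noise_var
    normal_eq = LinearOperator((npix, npix), dtype=float,
                               matvec=lambda m: A.T @ (Ninv * (A @ m)))
    m_hat, info = cg(normal_eq, A.T @ (Ninv * d))
    assert info == 0                                # CG converged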

  17. Dual influences of ecosystem size and disturbance on food chain length in streams.

    Science.gov (United States)

    McHugh, Peter A; McIntosh, Angus R; Jellyman, Phillip G

    2010-07-01

    The number of trophic transfers occurring between basal resources and top predators, the food chain length (FCL), varies widely in the world's ecosystems for reasons that are poorly understood, particularly for stream ecosystems. Available evidence indicates that FCL is set by energetic constraints, environmental stochasticity, or ecosystem size effects, although no single explanation has yet accounted for FCL patterns in a broad sense. Further, whether environmental disturbance can influence FCL has been debated on both theoretical and empirical grounds for quite some time. Using data from sixteen South Island, New Zealand streams, we determined whether the so-called ecosystem size, disturbance, or resource availability hypotheses could account for FCL variation in high-country fluvial environments. Stable isotope-based estimates of maximum trophic position ranged from 2.6 to 4.2 and averaged 3.5, a value on par with the global FCL average for streams. Model-selection results indicated that stream size and disturbance regime best explained across-site patterns in FCL, although resource availability was negatively correlated with our measure of disturbance; FCL approached its maximum in large, stable springs and was shortest in small, frequently disturbed streams. Community data indicate that ecosystem size influenced FCL primarily through its influence on local fish species richness (i.e., via trophic-level additions and/or insertions), whereas disturbance did so via an effect on the relative availability of intermediate predators (i.e., predatory invertebrates) as prey for fishes. Overall, our results demonstrate that disturbance can have an important food web-structuring role in stream ecosystems, and further imply that pluralistic explanations are needed to fully understand the range of structural variation observed for real food webs.

  18. Evolution of body size in Galapagos marine iguanas.

    Science.gov (United States)

    Wikelski, Martin

    2005-10-07

    Body size is one of the most important traits of organisms and allows predictions of an individual's morphology, physiology, behaviour and life history. However, explaining the evolution of complex traits such as body size is difficult because a plethora of other traits influence body size. Here I review what we know about the evolution of body size in a group of island reptiles and try to generalize about the mechanisms that shape body size. Galapagos marine iguanas occupy all 13 larger islands in this Pacific archipelago and have maximum island body weights between 900 and 12 000 g. The distribution of body sizes does not match mitochondrial clades, indicating that body size evolves independently of genetic relatedness. Marine iguanas lack intra- and inter-specific food competition and predators are not size-specific, discounting these factors as selective agents influencing body size. Instead I hypothesize that body size reflects the trade-offs between sexual and natural selection. We found that sexual selection continuously favours larger body sizes. Large males establish display territories and some gain over-proportional reproductive success in the iguanas' mating aggregations. Females select males based on size and activity and are thus responsible for the observed mating skew. However, large individuals are strongly selected against during El Niño-related famines when dietary algae disappear from the intertidal foraging areas. We showed that differences in algae sward ('pasture') heights and thermal constraints on large size are causally responsible for differences in maximum body size among populations. I hypothesize that body size in many animal species reflects a trade-off between foraging constraints and sexual selection and suggest that future research could focus on physiological and genetic mechanisms determining body size in wild animals. Furthermore, evolutionary stable body size distributions within populations should be analysed to better

  19. Optimal plot size in the evaluation of papaya scions: proposal and comparison of methods

    Directory of Open Access Journals (Sweden)

    Humberto Felipe Celanti

    Full Text Available ABSTRACT Evaluating the quality of scions is extremely important, and it can be done through characteristics of shoots and roots. This experiment evaluated the height of the aerial part, stem diameter, number of leaves, petiole length and root length of papaya seedlings. Analyses were performed on a blank trial with 240 seedlings of "Golden Pecíolo Curto". The optimum plot size was determined by applying the method of maximum curvature, the method of maximum curvature of the coefficient of variation, and a newly proposed method that incorporates bootstrap resampling simulation into the maximum curvature method. According to the results obtained, five is the optimal number of seedlings of papaya "Golden Pecíolo Curto" per plot. The proposed bootstrap method with replacement provides optimal plot sizes equal to or larger than those of the maximum curvature method, and the same plot size as the maximum curvature method of the coefficient of variation.
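
    As a rough illustration of this kind of procedure, the sketch below groups adjacent seedlings into plots of increasing size, fits CV(x) = a*x^(-b), locates the maximum-curvature point with the Meier-Lessman expression, and averages that optimum over bootstrap resamples of the blank-trial units. The data, the curvature-formula variant, and the averaging rule are assumptions for illustration, not the paper's exact algorithm.

    import numpy as np

    def cv_for_plot_size(units, k):
        """CV (%) of plot totals when k adjacent basic units form one plot."""
        plots = units[: len(units) // k * k].reshape(-1, k).sum(axis=1)
        return 100.0 * plots.std(ddof=1) / plots.mean()

    def max_curvature_size(units, sizes):
        """Maximum-curvature point of a fitted CV(x) = a * x**(-b)
        (Meier-Lessman expression)."""
        cvs = [cv_for_plot_size(units, k) for k in sizes]
        slope, log_a = np.polyfit(np.log(sizes), np.log(cvs), 1)
        a, b = np.exp(log_a), -slope
        return (a**2 * b**2 * (2*b + 1) / (b + 2)) ** (1.0 / (2*b + 2))

    def bootstrap_optimum(units, sizes, n_boot=2000, seed=1):
        """Average the optimum over bootstrap resamples (with replacement)."""
        rng = np.random.default_rng(seed)
        draws = [max_curvature_size(rng.choice(units, len(units), replace=True),
                                    sizes)
                 for _ in range(n_boot)]
        return float(np.ceil(np.mean(draws)))

    # hypothetical blank-trial data: 240 seedling measurements
    units = np.random.default_rng(0).gamma(shape=9.0, scale=2.0, size=240)
    print(bootstrap_optimum(units, sizes=range(1, 13)))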

  20. Catastrophic Disruption Threshold and Maximum Deflection from Kinetic Impact

    Science.gov (United States)

    Cheng, A. F.

    2017-12-01

    The use of a kinetic impactor to deflect an asteroid on a collision course with Earth was described in the NASA Near-Earth Object Survey and Deflection Analysis of Alternatives (2007) as the most mature approach for asteroid deflection and mitigation. The NASA DART mission will demonstrate asteroid deflection by kinetic impact at the Potentially Hazardous Asteroid 65803 Didymos in October 2022. The kinetic impactor approach is considered applicable for warning times of 10 years or more and hazardous asteroid diameters of 400 m or less. In principle, a larger kinetic impactor bringing greater kinetic energy could cause a larger deflection, but input of excessive kinetic energy will cause catastrophic disruption of the target, leaving possibly large fragments still on a collision course with Earth. Thus the catastrophic disruption threshold limits the maximum deflection achievable by a kinetic impactor. An often-cited rule of thumb states that the maximum deflection is 0.1 times the escape velocity before the target is disrupted. It turns out this rule of thumb does not work well. A comparison with numerical simulation results shows that a similar rule applies in the gravity limit, for large targets of more than 300 m, where the maximum deflection is roughly the escape velocity at a momentum enhancement factor β = 2. In the gravity limit, the rule of thumb corresponds to pure momentum coupling (μ = 1/3), but simulations find a slightly different scaling, μ = 0.43. In the smaller target size range to which kinetic impactors would apply, the catastrophic disruption limit is strength-controlled. A DART-like impactor will not disrupt any target asteroid, down to sizes significantly smaller than the 50 m below which a hazardous object would not penetrate the atmosphere in any case, unless the target is unusually strong.

  1. Rumor Identification with Maximum Entropy in MicroNet

    Directory of Open Access Journals (Sweden)

    Suisheng Yu

    2017-01-01

    Full Text Available The widely used applications of Microblog, WeChat, and other social networking platforms (that we call MicroNet) shorten the period of information dissemination and expand the range of information dissemination, which allows rumors to cause greater harm and have more influence. A hot topic in the information dissemination field is how to identify and block rumors. Based on the maximum entropy model, this paper constructs a recognition mechanism for rumor information in the micronetwork environment. First, based on information entropy theory, we obtained the characteristics of rumor information using the maximum entropy model. Next, we optimized the original classifier training set and the feature function to divide the information into rumors and nonrumors. Finally, the experimental simulation results show that the rumor identification results of this method are better than those of the original classifier and other related classification methods.
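
    A maximum entropy classifier over binary feature functions is equivalent to (multinomial) logistic regression, so the pipeline can be sketched with standard tools. The toy corpus, labels, and bag-of-words features below are illustrative stand-ins for the paper's optimized training set and feature functions.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training data (illustrative, not the paper's corpus)
    posts = ["shocking! forward this before it is deleted",
             "official statement released by the city government",
             "secret cure they do not want you to know",
             "match report: local team wins 2-1"]
    labels = [1, 0, 1, 0]  # 1 = rumor, 0 = non-rumor

    # Maximum entropy classification = logistic regression over feature
    # functions; word counts stand in for the paper's feature set.
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(posts, labels)
    print(model.predict(["forward this secret cure"]))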

  2. Effect of freeze-thaw cycling on grain size of biochar.

    Science.gov (United States)

    Liu, Zuolin; Dugan, Brandon; Masiello, Caroline A; Wahab, Leila M; Gonnermann, Helge M; Nittrouer, Jeffrey A

    2018-01-01

    Biochar may improve soil hydrology by altering soil porosity, density, hydraulic conductivity, and water-holding capacity. These properties are associated with the grain size distributions of both soil and biochar, and therefore may change as biochar weathers. Here we report how freeze-thaw (F-T) cycling impacts the grain size of pine, mesquite, miscanthus, and sewage waste biochars under two drainage conditions: undrained (all biochars) and a gravity-drained experiment (mesquite biochar only). In the undrained experiment, plant biochars showed a decrease in median grain size and a change in grain-size distribution consistent with the flaking off of thin layers from the biochar surface. The biochar grain-size distribution changed from unimodal to bimodal, with lower peaks and wider distributions. For plant biochars the median grain size decreased by up to 45.8% and the grain aspect ratio increased by up to 22.4% after 20 F-T cycles. F-T cycling did not change the grain size or aspect ratio of sewage waste biochar. We also observed changes in the skeletal density of biochars (maximum increase of 1.3%), envelope density (maximum decrease of 12.2%), and intraporosity (porosity inside particles, maximum increase of 3.2%). In the drained experiment, mesquite biochar exhibited a decrease of median grain size (up to 4.2%) and no change of aspect ratio after 10 F-T cycles. We also document a positive relationship between grain size decrease and initial water content, suggesting that biochar properties that increase water content, like high intraporosity, pore connectivity, large intrapores, and hydrophilicity, combined with undrained conditions and frequent F-T cycles, may increase biochar breakdown. The observed changes in biochar particle size and shape can be expected to alter hydrologic properties, and thus may impact both plant growth and the hydrologic cycle.

  3. An online detection system for aggregate sizes and shapes based on digital image processing

    Science.gov (United States)

    Yang, Jianhong; Chen, Sijia

    2017-02-01

    Traditional aggregate size measuring methods are time-consuming, taxing, and do not deliver online measurements. A new online detection system for determining aggregate size and shape, based on a digital camera with a charge-coupled device and subsequent digital image processing, has been developed to overcome these problems. The system captures images of aggregates while falling and while flat-lying. Using these data, the particle size and shape distribution can be obtained in real time. Here, we calibrate this method using standard globules. Our experiments show that the maximum particle size distribution error was only 3 wt%, while the maximum particle shape distribution error was only 2 wt% for data derived from falling aggregates, which have good dispersion. In contrast, the data for flat-lying aggregates had a maximum particle size distribution error of 12 wt% and a maximum particle shape distribution error of 10 wt%; their accuracy was clearly lower than for falling aggregates. However, they performed well for single-graded aggregates and did not require a dispersion device. Our system is low-cost and easy to install. It can successfully achieve online detection of aggregate size and shape with good reliability, and it has great potential for aggregate quality assurance.

  4. Improving a Lecture-Size Molecular Model Set by Repurposing Used Whiteboard Markers

    Science.gov (United States)

    Dragojlovic, Veljko

    2015-01-01

    Preparation of an inexpensive model set from whiteboard markers and either HGS molecular model set or atoms made of wood is described. The model set is relatively easy to prepare and is sufficiently large to be suitable as an instructor set for use in lectures.

  5. Optimal control problems with delay, the maximum principle and necessary conditions

    NARCIS (Netherlands)

    Frankena, J.F.

    1975-01-01

    In this paper we consider a rather general optimal control problem involving ordinary differential equations with delayed arguments and a set of equality and inequality restrictions on state and control variables. For this problem, a maximum principle is given in pointwise form, using variational

  6. DESIGN OF STRUCTURAL ELEMENTS IN THE EVENT OF THE PRE-SET RELIABILITY, REGULAR LOAD AND BEARING CAPACITY DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Tamrazyan Ashot Georgievich

    2012-10-01

    Full Text Available An accurate and adequate description of external influences and of the bearing capacity of the structural material requires the use of probability-theory methods. In this regard, a characteristic that describes the probability of failure-free operation is required. The reliability characteristic means that the maximum stress caused by the action of the load will not exceed the bearing capacity. In this paper, the author presents a solution to a problem of structural design, namely, the identification of the reliability of pre-set design parameters, in particular, cross-sectional dimensions. If the load distribution pattern is available, the regularities of the distribution functions make it possible to find the distribution of maximum stresses over the structure. Similarly, one can proceed to the design of structures of pre-set rigidity, reliability and stability in the case of a regular load distribution. We consider a design element (a monolithic concrete slab) whose maximum stress S depends linearly on the load q; within a pre-set period of time, exceedances occur according to the Poisson law. The analysis demonstrates that the variability of the bearing capacity produces a stronger effect on the relative cross-sectional dimensions of a slab than the variability of loads. It is therefore particularly important to reduce the coefficient of variation of the load-bearing capacity. One of the methods contemplates truncation of the bearing-capacity distribution by pre-culling the construction material.

  7. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  8. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of 3-dimensional electron momentum density distributions observed through a set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of the electron momentum density may be reliably carried out with the aid of a simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig
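
    Iterative MEM schemes of this kind typically start from a flat (maximum-entropy) density and apply multiplicative updates driven by the back-projected data residual. The sketch below is a generic Gull-Daniell-style update with an ad hoc step size and a toy projection operator; it illustrates the style of iteration, not Collins' specific algorithm.

    import numpy as np

    def mem_reconstruct(A, d, n_iter=500, rate=0.1):
        """Multiplicative maximum-entropy iteration (Gull-Daniell type).

        Reconstructs a nonnegative density f with A @ f ~= d by repeatedly
        scaling f with the exponential of the back-projected residual.
        Illustrative only; step size and stopping rule are ad hoc.
        """
        f = np.ones(A.shape[1])          # flat (maximum-entropy) start
        for _ in range(n_iter):
            f *= np.exp(rate * (A.T @ (d - A @ f)))
        return f

    # Toy 1-D "profiles": three overlapping box projections of a density
    A = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]], float)
    true_f = np.array([0.1, 0.7, 0.9, 0.3])
    f_hat = mem_reconstruct(A, A @ true_f)
    print(np.round(f_hat, 2))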

  9. The maximum economic depth of groundwater abstraction for irrigation

    Science.gov (United States)

    Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.

    2017-12-01

    Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of global food production, and its importance is expected to grow further in the near future. Already, about 70% of globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge, and we see massive groundwater head declines in these areas. An important question then is: to what maximum depth can groundwater be pumped while remaining economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs, or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where the costs of well drilling and the energy costs of pumping, which are functions of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries using GDP per capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000, and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas of the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. The most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of
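
    The decision rule reduces to comparing depth-dependent costs with crop revenue. The sketch below annualizes a per-metre drilling cost, adds a lift-proportional energy cost, and returns the deepest depth at which revenue still covers costs; every coefficient is a hypothetical placeholder, not a calibrated value from the study.

    import numpy as np

    def max_economic_depth(revenue_per_m3, demand_m3, drill_cost_per_m=300.0,
                           energy_cost_per_m3_per_m=0.0002, lifetime_yr=20.0,
                           depths=np.arange(10.0, 500.0, 1.0)):
        """Deepest depth (m) at which irrigation revenue still covers costs.

        Annualized drilling cost plus lift-proportional pumping-energy cost
        is compared with yearly crop revenue; all coefficients are
        hypothetical placeholders.
        """
        revenue = revenue_per_m3 * demand_m3
        cost = (drill_cost_per_m * depths / lifetime_yr            # capital
                + energy_cost_per_m3_per_m * depths * demand_m3)   # energy
        economic = depths[revenue >= cost]
        return float(economic.max()) if economic.size else None

    print(max_economic_depth(revenue_per_m3=0.05, demand_m3=200_000.0))  # ~181 m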

  10. Variation in clutch size in relation to nest size in birds.

    Science.gov (United States)

    Møller, Anders P; Adriaensen, Frank; Artemyev, Alexandr; Bańbura, Jerzy; Barba, Emilio; Biard, Clotilde; Blondel, Jacques; Bouslama, Zihad; Bouvier, Jean-Charles; Camprodon, Jordi; Cecere, Francesco; Charmantier, Anne; Charter, Motti; Cichoń, Mariusz; Cusimano, Camillo; Czeszczewik, Dorota; Demeyrier, Virginie; Doligez, Blandine; Doutrelant, Claire; Dubiec, Anna; Eens, Marcel; Eeva, Tapio; Faivre, Bruno; Ferns, Peter N; Forsman, Jukka T; García-Del-Rey, Eduardo; Goldshtein, Aya; Goodenough, Anne E; Gosler, Andrew G; Góźdź, Iga; Grégoire, Arnaud; Gustafsson, Lars; Hartley, Ian R; Heeb, Philipp; Hinsley, Shelley A; Isenmann, Paul; Jacob, Staffan; Järvinen, Antero; Juškaitis, Rimvydas; Korpimäki, Erkki; Krams, Indrikis; Laaksonen, Toni; Leclercq, Bernard; Lehikoinen, Esa; Loukola, Olli; Lundberg, Arne; Mainwaring, Mark C; Mänd, Raivo; Massa, Bruno; Mazgajski, Tomasz D; Merino, Santiago; Mitrus, Cezary; Mönkkönen, Mikko; Morales-Fernaz, Judith; Morin, Xavier; Nager, Ruedi G; Nilsson, Jan-Åke; Nilsson, Sven G; Norte, Ana C; Orell, Markku; Perret, Philippe; Pimentel, Carla S; Pinxten, Rianne; Priedniece, Ilze; Quidoz, Marie-Claude; Remeš, Vladimir; Richner, Heinz; Robles, Hugo; Rytkönen, Seppo; Senar, Juan Carlos; Seppänen, Janne T; da Silva, Luís P; Slagsvold, Tore; Solonen, Tapio; Sorace, Alberto; Stenning, Martyn J; Török, János; Tryjanowski, Piotr; van Noordwijk, Arie J; von Numers, Mikael; Walankiewicz, Wiesław; Lambrechts, Marcel M

    2014-09-01

    Nests are structures built to support and protect eggs and/or offspring from predators, parasites, and adverse weather conditions. Nests are mainly constructed prior to egg laying, meaning that parent birds must make decisions about nest site choice and nest building behavior before the start of egg-laying. Parent birds should be selected to choose nest sites and to build optimally sized nests, yet our current understanding of clutch size-nest size relationships is limited to small-scale studies performed over short time periods. Here, we quantified the relationship between clutch size and nest size, using an exhaustive database of 116 slope estimates based on 17,472 nests of 21 species of hole and non-hole-nesting birds. There was a significant, positive relationship between clutch size and the base area of the nest box or the nest, and this relationship did not differ significantly between open nesting and hole-nesting species. The slope of the relationship showed significant intraspecific and interspecific heterogeneity among four species of secondary hole-nesting species, but also among all 116 slope estimates. The estimated relationship between clutch size and nest box base area in study sites with more than a single size of nest box was not significantly different from the relationship using studies with only a single size of nest box. The slope of the relationship between clutch size and nest base area in different species of birds was significantly negatively related to minimum base area, and less so to maximum base area in a given study. These findings are consistent with the hypothesis that bird species have a general reaction norm reflecting the relationship between nest size and clutch size. Further, they suggest that scientists may influence the clutch size decisions of hole-nesting birds through the provisioning of nest boxes of varying sizes.

  11. Step Sizes for Strong Stability Preservation with Downwind-Biased Operators

    KAUST Repository

    Ketcheson, David I.

    2011-01-01

    ...order accuracy. It is possible to achieve more relaxed step size restrictions in the discretization of hyperbolic PDEs through the use of both upwind- and downwind-biased semidiscretizations. We investigate bounds on the maximum SSP step size for methods...

  12. UpSet: Visualization of Intersecting Sets

    Science.gov (United States)

    Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter

    2016-01-01

    Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912

  13. Experimentally reducing clutch size reveals a fixed upper limit to egg size in snakes, evidence from the king ratsnake, Elaphe carinata.

    Science.gov (United States)

    Ji, Xiang; Du, Wei-Guo; Li, Hong; Lin, Long-Hui

    2006-08-01

    Snakes are free of the pelvic girdle's constraint on maximum offspring size, and therefore present an opportunity to investigate the upper limit to offspring size without the limit imposed by pelvic girdle dimensions. We used the king ratsnake (Elaphe carinata) as a model animal to examine whether follicle ablation may result in enlargement of egg size in snakes and, if so, whether there is a fixed upper limit to egg size. Females with small yolking follicles were assigned to three manipulated, one sham-manipulated, and one control treatment in mid-May, and two, four or six yolking follicles in the manipulated females were then ablated. Females undergoing follicle ablation produced fewer, but larger and more elongated, eggs than control females, primarily by increasing egg length. This finding suggests that follicle ablation may result in enlargement of egg size in E. carinata. Mean values for egg width remained almost unchanged across the five treatments, suggesting that egg width is more likely to be shaped by the morphological features of the oviduct. Clutch mass dropped dramatically in four- and six-follicle ablated females. The function describing the relationship between size and number of eggs reveals that egg size increases with decreasing clutch size at an ever-decreasing rate, with the tangent slope of the function for the six-follicle ablation treatment being -0.04. According to the function describing the instantaneous variation in tangent slope, the maximum value of the tangent slope should converge towards zero. This result provides evidence that there is a fixed upper limit to egg size in E. carinata.

  14. Discretisation Schemes for Level Sets of Planar Gaussian Fields

    Science.gov (United States)

    Beliaev, D.; Muirhead, S.

    2018-01-01

    Smooth random Gaussian functions play an important role in mathematical physics, a main example being the random plane wave model conjectured by Berry to give a universal description of high-energy eigenfunctions of the Laplacian on generic compact manifolds. Our work is motivated by questions about the geometry of such random functions, in particular relating to the structure of their nodal and level sets. We study four discretisation schemes that extract information about level sets of planar Gaussian fields. Each scheme recovers information up to a different level of precision, and each requires a maximum mesh-size in order to be valid with high probability. The first two schemes are generalisations and enhancements of similar schemes that have appeared in the literature (Beffara and Gayet in Publ Math IHES, 2017. https://doi.org/10.1007/s10240-017-0093-0; Mischaikow and Wanner in Ann Appl Probab 17:980-1018, 2007); these give complete topological information about the level sets on either a local or global scale. As an application, we improve the results in Beffara and Gayet (2017) on Russo-Seymour-Welsh estimates for the nodal set of positively-correlated planar Gaussian fields. The third and fourth schemes are, to the best of our knowledge, completely new. The third scheme is specific to the nodal set of the random plane wave, and provides global topological information about the nodal set up to 'visible ambiguities'. The fourth scheme gives a way to approximate the mean number of excursion domains of planar Gaussian fields.

  15. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  16. A practical exact maximum compatibility algorithm for reconstruction of recent evolutionary history.

    Science.gov (United States)

    Cherry, Joshua L

    2017-02-23

    Maximum compatibility is a method of phylogenetic reconstruction that is seldom applied to molecular sequences. It may be ideal for certain applications, such as reconstructing phylogenies of closely-related bacteria on the basis of whole-genome sequencing. Here I present an algorithm that rapidly computes phylogenies according to a compatibility criterion. Although based on solutions to the maximum clique problem, this algorithm deals properly with ambiguities in the data. The algorithm is applied to bacterial data sets containing up to nearly 2000 genomes with several thousand variable nucleotide sites. Run times are several seconds or less. Computational experiments show that maximum compatibility is less sensitive than maximum parsimony to the inclusion of nucleotide data that, though derived from actual sequence reads, has been identified as likely to be misleading. Maximum compatibility is a useful tool for certain phylogenetic problems, such as inferring the relationships among closely-related bacteria from whole-genome sequence data. The algorithm presented here rapidly solves fairly large problems of this type, and provides robustness against misleading characters that can pollute large-scale sequencing data.
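
    The compatibility criterion can be phrased as a maximum clique problem: vertices are characters (alignment columns), edges join pairwise-compatible characters, and a largest clique is a largest mutually compatible character set. The sketch below uses the classical four-gamete test for binary characters and an off-the-shelf clique routine; the handling of ambiguous states that the paper emphasizes is omitted here.

    import networkx as nx
    from itertools import combinations

    def compatible(col_a, col_b):
        """Four-gamete test: two binary characters are compatible iff the
        taxa do not exhibit all four state combinations."""
        return len(set(zip(col_a, col_b))) < 4

    # Rows = taxa, columns = binary characters (toy alignment)
    columns = [(0, 0, 1, 1), (0, 1, 1, 0), (0, 0, 0, 1), (1, 1, 0, 0)]

    G = nx.Graph()
    G.add_nodes_from(range(len(columns)))
    G.add_edges_from((i, j)
                     for i, j in combinations(range(len(columns)), 2)
                     if compatible(columns[i], columns[j]))

    clique, size = nx.max_weight_clique(G, weight=None)
    print(sorted(clique), size)  # largest mutually compatible character set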

  17. A comparison of hydraulic architecture in three similarly sized woody species differing in their maximum potential height

    Science.gov (United States)

    Katherine A. McCulloh; Daniel M. Johnson; Joshua Petitmermet; Brandon McNellis; Frederick C. Meinzer; Barbara Lachenbruch; Nathan Phillips

    2015-01-01

    The physiological mechanisms underlying the short maximum height of shrubs are not understood. One possible explanation is that differences in the hydraulic architecture of shrubs compared with co-occurring taller trees prevent the shrubs from growing taller. To explore this hypothesis, we examined various hydraulic parameters, including vessel lumen diameter,...

  18. Size structures sensory hierarchy in ocean life

    DEFF Research Database (Denmark)

    Martens, Erik Andreas; Wadhwa, Navish; Jacobsen, Nis Sand

    2015-01-01

    Life in the ocean is shaped by the trade-off between a need to encounter other organisms for feeding or mating, and to avoid encounters with predators. Avoiding or achieving encounters necessitates an efficient means of collecting the maximum possible information from the surroundings through ... predict the body size limits for various sensory modes, which align very well with size ranges found in the literature. The treatise of all ocean life, from unicellular organisms to whales, demonstrates how body size determines available sensing modes, and thereby acts as a major structuring factor of aquatic...

  19. How long do centenarians survive? Life expectancy and maximum lifespan.

    Science.gov (United States)

    Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A

    2017-08-01

    The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual-level data on all Swedish and Danish centenarians born from 1870 to 1901; in total, 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age of 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  20. The Application of an Army Prospective Payment Model Structured on the Standards Set Forth by the CHAMPUS Maximum Allowable Charges and the Center for Medicare and Medicaid Services: An Academic Approach

    Science.gov (United States)

    2005-04-29

    Final Report, July 2004 to July 2005. Health Care Administration.

  1. Twenty-five years of maximum-entropy principle

    Science.gov (United States)

    Kapur, J. N.

    1983-04-01

    The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.

  2. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. The analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit, a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  3. Effect of beamlet step-size on IMRT plan quality

    International Nuclear Information System (INIS)

    Zhang Guowei; Jiang Ziping; Shepard, David; Earl, Matt; Yu, Cedric

    2005-01-01

    We have studied the degree to which beamlet step-size impacts the quality of intensity modulated radiation therapy (IMRT) treatment plans. Treatment planning for IMRT begins with the application of a grid that divides each beam's-eye-view of the target into a number of smaller beamlets (pencil beams) of radiation. The total dose is computed as a weighted sum of the dose delivered by the individual beamlets. The width of each beamlet is set to match the width of the corresponding leaf of the multileaf collimator (MLC). The length of each beamlet (beamlet step-size) is parallel to the direction of leaf travel. The beamlet step-size represents the minimum stepping distance of the leaves of the MLC and is typically predetermined by the treatment planning system. This selection imposes an artificial constraint because the leaves of the MLC and the jaws can both move continuously. Removing the constraint can potentially improve the IMRT plan quality. In this study, the optimized results were achieved using an aperture-based inverse planning technique called direct aperture optimization (DAO). We have tested the relationship between pencil beam step-size and plan quality using the American College of Radiology's IMRT test case. For this case, a series of IMRT treatment plans were produced using beamlet step-sizes of 1, 2, 5, and 10 mm. Continuous improvements were seen with each reduction in beamlet step size. The maximum dose to the planning target volume (PTV) was reduced from 134.7% to 121.5% and the mean dose to the organ at risk (OAR) was reduced from 38.5% to 28.2% as the beamlet step-size was reduced from 10 to 1 mm. The smaller pencil beam sizes also led to steeper dose gradients at the junction between the target and the critical structure with gradients of 6.0, 7.6, 8.7, and 9.1 dose%/mm achieved for beamlet step sizes of 10, 5, 2, and 1 mm, respectively

  4. On the quirks of maximum parsimony and likelihood on phylogenetic networks.

    Science.gov (United States)

    Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles

    2017-03-21

    Maximum parsimony is one of the most frequently-discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogenetic networks, which can display such events, are becoming of more and more interest in phylogenetic research. It is therefore necessary to extend concepts like maximum parsimony from phylogenetic trees to networks. Several suggestions for possible extensions can be found in recent literature, for instance the softwired and the hardwired parsimony concepts. In this paper, we analyze the so-called big parsimony problem under these two concepts, i.e. we investigate maximum parsimonious networks and analyze their properties. In particular, we show that finding a softwired maximum parsimony network is possible in polynomial time. We also show that the set of maximum parsimony networks for the hardwired definition always contains at least one phylogenetic tree. Lastly, we investigate some parallels of parsimony to different likelihood concepts on phylogenetic networks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Computed Tomographic Window Setting for Bronchial Measurement to Guide Double-Lumen Tube Size.

    Science.gov (United States)

    Seo, Jeong-Hwa; Bae, Jinyoung; Paik, Hyesun; Koo, Chang-Hoon; Bahk, Jae-Hyon

    2018-04-01

    The bronchial diameter measured on computed tomography (CT) can be used to guide double-lumen tube (DLT) sizes objectively. The bronchus is known to be measured most accurately in the so-called bronchial CT window. The authors investigated whether using the bronchial window results in the selection of more appropriately sized DLTs than using the other windows. CT image analysis and prospective randomized study. Tertiary hospital. Adults receiving left-sided DLTs. The authors simulated selection of DLT sizes based on the left bronchial diameters measured in the lung (width 1,500 Hounsfield units [HU], level -700 HU), bronchial (1,000 HU, -450 HU), and mediastinal (400 HU, 25 HU) CT windows. Furthermore, patients were randomly assigned to undergo imaging with either the bronchial or mediastinal window to guide DLT sizes. Using the underwater seal technique, the authors assessed whether the DLT was appropriately sized, undersized, or oversized for the patient. On 130 CT images, the bronchial diameter (9.9 ± 1.2 mm v 10.5 ± 1.3 mm v 11.7 ± 1.3 mm) and the selected DLT size differed among the lung, bronchial, and mediastinal windows, respectively (p ...). In the prospective study, oversized tubes were chosen less frequently in the bronchial window than in the mediastinal window (6/110 v 23/111; risk ratio 0.38; 95% CI 0.19-0.79; p = 0.003). No tubes were undersized after measurements in these two windows. Bronchial measurement in the bronchial window guided more appropriately sized DLTs than the lung or mediastinal windows. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Use of electrothermal atomic absorption spectrometry for size profiling of gold and silver nanoparticles.

    Science.gov (United States)

    Panyabut, Teerawat; Sirirat, Natnicha; Siripinyanond, Atitaya

    2018-02-13

    Electrothermal atomic absorption spectrometry (ETAAS) was applied to investigate the atomization behaviors of gold nanoparticles (AuNPs) and silver nanoparticles (AgNPs) in order to relate them to particle size information. At various atomization temperatures from 1400 °C to 2200 °C, the time-dependent atomic absorption peak profiles of AuNPs and AgNPs with sizes varying from 5 nm to 100 nm were examined. With increasing particle size, the maximum absorbance was observed at a later time. The time at maximum absorbance was found to increase linearly with increasing particle size, suggesting that ETAAS can be applied to provide the size information of nanoparticles. With an atomization temperature of 1600 °C, mixtures of nanoparticles containing two particle sizes, i.e., 5 nm tannic-acid-stabilized AuNPs with 60, 80, or 100 nm citrate-stabilized AuNPs, were investigated, and bimodal peaks were observed. The particle-size-dependent atomization behaviors of nanoparticles show the potential application of ETAAS for providing size information of nanoparticles. The calibration plot between the time at maximum absorbance and the particle size was applied to estimate the particle size of in-house synthesized AuNPs and AgNPs, and the results obtained were in good agreement with those from flow field-flow fractionation (FlFFF) and transmission electron microscopy (TEM) techniques. Furthermore, a linear relationship between the activation energy and the particle size was observed. Copyright © 2017 Elsevier B.V. All rights reserved.
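
    The reported linear relation between particle size and the time of maximum absorbance suggests a straightforward calibration: fit a line through standards and invert it for unknowns. The numbers below are illustrative stand-ins, not the paper's measurements.

    import numpy as np

    # Calibration standards: nominal particle size (nm) vs. time of maximum
    # absorbance (s); values are hypothetical.
    size_nm = np.array([5.0, 20.0, 40.0, 60.0, 80.0, 100.0])
    t_max_s = np.array([1.1, 1.4, 1.8, 2.2, 2.6, 3.0])

    slope, intercept = np.polyfit(size_nm, t_max_s, 1)

    def estimate_size(t_max):
        """Invert the linear calibration to size an unknown particle batch."""
        return (t_max - intercept) / slope

    print(round(estimate_size(2.0), 1))  # estimated size in nm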

  7. Spatial-temporal characteristics of lightning flash size in a supercell storm

    Science.gov (United States)

    Zhang, Zhixiao; Zheng, Dong; Zhang, Yijun; Lu, Gaopeng

    2017-11-01

    The flash sizes of a supercell storm in New Mexico on October 5, 2004 are studied using observations from the New Mexico Lightning Mapping Array and the Albuquerque, New Mexico, Doppler radar (KABX). First, during the temporal evolution of the supercell, the mean flash size is anti-correlated with the flash rate, following a power function, with a correlation coefficient of -0.87. In addition, the mean flash size is linearly correlated with the area of reflectivity > 30 dBZ at 5 km normalized by the flash rate, with a correlation coefficient of 0.88. Second, in the horizontal, flash size increases along the direction from the region near the convection zone to the adjacent forward anvil. The region of minimum flash size usually corresponds to the region of maximum flash initiation and extent density. The horizontal correspondence between the mean flash size and the flash extent density can also be fitted by a power function, and the correlation coefficient is > 0.5 in 50% of the radar volume scans. Furthermore, the quality of the fit is positively correlated with the convective intensity. Third, in the vertical direction, the height of the maximum flash initiation density is close to the height of the maximum flash extent density, but corresponds to the height where the mean flash size is relatively small. In the discussion, the distribution of small and dense charge regions, when and where convection is vigorous in the storm, is deduced to be responsible for the observation that flash size is temporally and spatially anti-correlated with flash rate, flash density, and convective intensity.
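
    The reported power-function anti-correlation can be fitted by linear regression in log-log space. The values below are made-up stand-ins for the LMA-derived statistics.

    import numpy as np

    # Fit mean flash size = c * rate**p in log-log space (toy numbers)
    rate = np.array([5.0, 10.0, 20.0, 40.0, 80.0])    # flashes per minute
    mean_size = np.array([9.0, 6.5, 4.8, 3.4, 2.5])   # km, illustrative

    p, log_c = np.polyfit(np.log(rate), np.log(mean_size), 1)
    print(p, np.exp(log_c))  # exponent p < 0 reflects the anti-correlation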

  8. The linear sizes tolerances and fits system modernization

    Science.gov (United States)

    Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.

    2018-04-01

    This study addresses the urgent topic of ensuring the quality of technical products through the tolerancing of component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, firstly, to classify as linear sizes also the linear coordinating sizes that determine the location of detail elements and, secondly, to justify the basic deviation of the tolerance interval for an element's linear size. The research uses geometrical modeling of the elements of real details together with analytical and experimental methods. It is shown that linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The basic deviation of this coordinating tolerance is the mean zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system remain for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals: it becomes the maximum-material deviation, i.e. EI, the lower deviation, for internal elements (holes) and es, the upper deviation, for external elements (shafts). It is the maximum-material sizes that take part in the mating of shafts and holes and determine the type of fit.

  9. Optimal Photovoltaic System Sizing of a Hybrid Diesel/PV System

    Directory of Open Access Journals (Sweden)

    Ahmed Belhamadia

    2017-03-01

    Full Text Available This paper presents a cost analysis study of a hybrid diesel and photovoltaic (PV) system in Kuala Terengganu, Malaysia. It first presents the climate conditions of the city, followed by the load profile of a 2 MVA network; the system was evaluated as a standalone system. The diesel generator rating was chosen to comply with ISO 8528. The maximum size of the PV system was selected such that its penetration would not exceed 25%. Several sizes were considered, but the 400 kWp system was found to be the most cost-efficient. Cost estimation was done using the Hybrid Optimization Model for Electric Renewables (HOMER). Based on the simulation results, the climate conditions and the NEC 960, the maximum and minimum numbers of series-connected modules were suggested, as well as the maximum number of parallel strings.
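
    The 25% penetration cap fixes the PV size by simple arithmetic once the real peak load is known. The 0.8 power factor used below to convert the 2 MVA network rating to real power is an assumption made for illustration; with it, the cap reproduces the 400 kWp choice.

    # Sizing sketch: maximum PV capacity under a penetration cap.
    network_mva = 2.0
    power_factor = 0.8        # assumed; the paper states only the MVA rating
    penetration_limit = 0.25

    peak_load_kw = network_mva * power_factor * 1000   # 1600 kW real power
    max_pv_kwp = penetration_limit * peak_load_kw      # 400 kWp cap
    print(max_pv_kwp)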

  10. Setting the renormalization scale in pQCD: Comparisons of the principle of maximum conformality with the sequential extended Brodsky-Lepage-Mackenzie approach

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Hong -Hao [Chongqing Univ., Chongqing (People' s Republic of China); Wu, Xing -Gang [Chongqing Univ., Chongqing (People' s Republic of China); Ma, Yang [Chongqing Univ., Chongqing (People' s Republic of China); Brodsky, Stanley J. [Stanford Univ., Stanford, CA (United States); Mojaza, Matin [KTH Royal Inst. of Technology and Stockholm Univ., Stockholm (Sweden)

    2015-05-26

    A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent on the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which has been primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. As a result, we then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R

  11. Near-maximum-power-point-operation (nMPPO) design of photovoltaic power generation system

    Energy Technology Data Exchange (ETDEWEB)

    Huang, B.J.; Sun, F.S.; Ho, R.W. [Department of Mechanical Engineering, National Taiwan University, Taipei 106, Taiwan (China)

    2006-08-15

    The present study proposes a PV system design, called 'near-maximum-power-point-operation' (nMPPO), that maintains performance very close to that of a PV system with MPPT (maximum-power-point tracking) but eliminates the MPPT hardware. The concept of nMPPO is to match the design of the battery bank voltage V_set with the MPP (maximum-power point) of the PV module, based on an analysis using meteorological data. Three design methods are used in the present study to determine the optimal V_set. The analytical results show that nMPPO is feasible and that the optimal V_set falls in the range 13.2-15.0 V for the MSX60 PV module. The long-term performance simulation shows that the overall nMPPO efficiency η_nMPPO is higher than 94%. Two outdoor field tests were carried out in the present study to verify the design of nMPPO. The test results for a single PV module (60 Wp) indicate that the nMPPO efficiency η_nMPPO is mostly higher than 93% at various PV temperatures T_pv. Another long-term field test of a 1 kWp PV array using nMPPO shows that the power generation using nMPPO is almost identical to that with MPPT under various weather conditions and T_pv variations from 24 °C to 70 °C. (author)
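
    The essence of nMPPO is a one-dimensional search: pick the fixed operating voltage V_set that maximizes the energy harvested over the distribution of maximum power points induced by weather and module temperature. The sketch below assumes a parabolic power fall-off around each MPP and synthetic operating conditions; both are modelling conveniences, not the authors' PV model or data. The ratio printed at the end mirrors the idea behind the η_nMPPO efficiency figure.

    import numpy as np

    rng = np.random.default_rng(0)

    # Sampled operating conditions (illustrative): the MPP voltage shifts
    # with module temperature, the MPP power with irradiance.
    v_mpp = rng.normal(15.0, 1.0, 5000)     # V
    p_mpp = rng.uniform(20.0, 60.0, 5000)   # W

    def mean_power(v_set, k=2.0):
        """Average power when clamped at v_set (parabolic fall-off model)."""
        rel = (v_set - v_mpp) / v_mpp
        return np.mean(p_mpp * np.clip(1.0 - k * rel**2, 0.0, None))

    candidates = np.arange(12.0, 18.0, 0.1)
    best = candidates[np.argmax([mean_power(v) for v in candidates])]
    print(best, mean_power(best) / np.mean(p_mpp))  # V_set and efficiency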

  12. Jarzynski equality in the context of maximum path entropy

    Science.gov (United States)

    González, Diego; Davis, Sergio

    2017-06-01

    In the broad effort to find an axiomatic derivation of nonequilibrium statistical mechanics from fundamental principles, such as the maximum path entropy (also known as the Maximum Caliber principle), this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy difference between two equilibrium thermodynamic states to the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality is performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social, financial and ecological systems.
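
    The equality itself, ⟨exp(-βW)⟩ = exp(-βΔF), is easy to check numerically in the one case with a closed form: a Gaussian work distribution, for which ΔF = ⟨W⟩ - βσ²/2. The sketch below is purely illustrative and assumes that Gaussian form.

    import numpy as np

    rng = np.random.default_rng(0)
    beta = 1.0                    # inverse temperature (units of k_B*T = 1)
    mu_W, sigma_W = 2.0, 1.0      # assumed Gaussian work distribution

    W = rng.normal(mu_W, sigma_W, size=1_000_000)

    # Jarzynski: <exp(-beta*W)> = exp(-beta*dF)
    dF_est = -np.log(np.mean(np.exp(-beta * W))) / beta
    dF_exact = mu_W - beta * sigma_W**2 / 2   # Gaussian closed form

    print(dF_est, dF_exact)       # both ~1.5; note dF <= <W> (second law)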

  13. The impact of image reconstruction settings on 18F-FDG PET radiomic features. Multi-scanner phantom and patient studies

    International Nuclear Information System (INIS)

    Shiri, Isaac; Abdollahi, Hamid; Rahmim, Arman; Ghaffarian, Pardis; Geramifar, Parham; Bitarafan-Rajabi, Ahmad

    2017-01-01

    The purpose of this study was to investigate the robustness of different PET/CT image radiomic features over a wide range of reconstruction settings. Phantom and patient studies were conducted, including two PET/CT scanners. Different reconstruction algorithms and parameters were studied, including the number of sub-iterations, the number of subsets, the full width at half maximum (FWHM) of the Gaussian filter, the scan time per bed position and the matrix size. Lesions were delineated and one hundred radiomic features were extracted. All radiomic features were categorized based on the coefficient of variation (COV). Forty-seven percent of the features showed COV ≤ 5%, while 10% showed COV > 20%. All geometry-based features, and 44% and 41% of the intensity-based and texture-based features, respectively, were found to be robust. With regard to matrix size, 56% of all features were found to be non-robust (COV > 20%) and 6% robust (COV ≤ 5%). Variability and robustness of PET/CT image radiomics under advanced reconstruction settings is feature-dependent, and different settings have different effects on different features. Radiomic features with low COV can be considered good candidates for reproducible tumour quantification in multi-center studies. (orig.)
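
    A sketch of the categorization step described above: compute each feature's COV across reconstruction settings and bin it at the 5% and 20% thresholds used in the study. The data here are synthetic; only the thresholds come from the abstract.

    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic matrix: 100 radiomic features x 12 reconstruction settings
    sigma = rng.uniform(0.02, 0.3, size=(100, 1))
    features = rng.lognormal(mean=2.0, sigma=sigma, size=(100, 12))

    cov = 100 * features.std(axis=1, ddof=1) / features.mean(axis=1)

    robust = int(np.sum(cov <= 5))       # candidates for multi-center use
    non_robust = int(np.sum(cov > 20))
    print(f"robust: {robust}, intermediate: {100 - robust - non_robust}, "
          f"non-robust: {non_robust}")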

  14. The impact of image reconstruction settings on 18F-FDG PET radiomic features. Multi-scanner phantom and patient studies

    Energy Technology Data Exchange (ETDEWEB)

    Shiri, Isaac; Abdollahi, Hamid [Iran University of Medical Sciences, Department of Medical Physics, School of Medicine, Tehran (Iran, Islamic Republic of); Rahmim, Arman [Johns Hopkins University, Department of Radiology, Baltimore, MD (United States); Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, MD (United States); Ghaffarian, Pardis [Shahid Beheshti University of Medical Sciences, Chronic Respiratory Diseases Research Center, National Research Institute of Tuberculosis and Lung Diseases (NRITLD), Tehran (Iran, Islamic Republic of); Shahid Beheshti University of Medical Sciences, PET/CT and Cyclotron Center, Masih Daneshvari Hospital, Tehran (Iran, Islamic Republic of); Geramifar, Parham [Tehran University of Medical Sciences, Research Center for Nuclear Medicine, Shariati Hospital, Tehran (Iran, Islamic Republic of); Bitarafan-Rajabi, Ahmad [Iran University of Medical Sciences, Department of Medical Physics, School of Medicine, Tehran (Iran, Islamic Republic of); Iran University of Medical Sciences, Department of Nuclear Medicine, Rajaei Cardiovascular, Medical and Research Center, Tehran (Iran, Islamic Republic of)

    2017-11-15

    The purpose of this study was to investigate the robustness of different PET/CT image radiomic features over a wide range of reconstruction settings. Phantom and patient studies were conducted, including two PET/CT scanners. Different reconstruction algorithms and parameters were studied, including the number of sub-iterations, the number of subsets, the full width at half maximum (FWHM) of the Gaussian filter, the scan time per bed position and the matrix size. Lesions were delineated and one hundred radiomic features were extracted. All radiomic features were categorized based on the coefficient of variation (COV). Forty-seven percent of the features showed COV ≤ 5%, while 10% showed COV > 20%. All geometry-based features, and 44% and 41% of the intensity-based and texture-based features, respectively, were found to be robust. With regard to matrix size, 56% of all features were found to be non-robust (COV > 20%) and 6% robust (COV ≤ 5%). Variability and robustness of PET/CT image radiomics under advanced reconstruction settings is feature-dependent, and different settings have different effects on different features. Radiomic features with low COV can be considered good candidates for reproducible tumour quantification in multi-center studies. (orig.)

  15. Set theory and logic

    CERN Document Server

    Stoll, Robert R

    1979-01-01

    Set Theory and Logic is the result of a course of lectures for advanced undergraduates, developed at Oberlin College for the purpose of introducing students to the conceptual foundations of mathematics. Mathematics, specifically the real number system, is approached as a unity whose operations can be logically ordered through axioms. One of the most complex and essential of modern mathematical innovations, the theory of sets (crucial to quantum mechanics and other sciences), is introduced in a most careful and conceptual manner, aiming for the maximum in clarity and stimulation for further study in

  16. Filter Selection for Optimizing the Spectral Sensitivity of Broadband Multispectral Cameras Based on Maximum Linear Independence.

    Science.gov (United States)

    Li, Sui-Xian

    2018-05-07

    Previous research has shown the effectiveness of selecting filter sets from among a large set of commercial broadband filters by a vector analysis method based on maximum linear independence (MLI). However, the traditional MLI approach is suboptimal due to the need to predefine the first filter of the selected set as the one with the maximum ℓ₂ norm among all available filters. An exhaustive imaging simulation with every single filter serving as the first filter is conducted to investigate the features of the most competent filter set. From the simulation, the characteristics of the most competent filter set are discovered. Besides minimization of the condition number, the geometric features of the best-performing filter set comprise a distinct transmittance peak along the wavelength axis for the first filter, a generally uniform distribution of the filters' peaks and substantial overlaps of the transmittance curves of adjacent filters. Therefore, the best-performing filter sets can be recognized intuitively by simple vector analysis and just a few experimental verifications. A practical two-step framework for selecting the optimal filter set is recommended, which guarantees a significant enhancement of the performance of the systems. This work should be useful for optimizing the spectral sensitivity of broadband multispectral imaging sensors.

  17. Filter Selection for Optimizing the Spectral Sensitivity of Broadband Multispectral Cameras Based on Maximum Linear Independence

    Directory of Open Access Journals (Sweden)

    Sui-Xian Li

    2018-05-01

    Full Text Available Previous research has shown the effectiveness of selecting filter sets from among a large set of commercial broadband filters by a vector analysis method based on maximum linear independence (MLI). However, the traditional MLI approach is suboptimal due to the need to predefine the first filter of the selected set as the one with the maximum ℓ2 norm among all available filters. An exhaustive imaging simulation with every single filter serving as the first filter is conducted to investigate the features of the most competent filter set. From the simulation, the characteristics of the most competent filter set are discovered. Besides minimization of the condition number, the geometric features of the best-performing filter set comprise a distinct transmittance peak along the wavelength axis for the first filter, a generally uniform distribution of the filters' peaks and substantial overlaps of the transmittance curves of adjacent filters. Therefore, the best-performing filter sets can be recognized intuitively by simple vector analysis and just a few experimental verifications. A practical two-step framework for selecting the optimal filter set is recommended, which guarantees a significant enhancement of the performance of the systems. This work should be useful for optimizing the spectral sensitivity of broadband multispectral imaging sensors.
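
    One plausible reading of an MLI-style greedy selection is sketched below: start from the filter with the largest ℓ2 norm, then repeatedly add the filter whose transmittance vector has the largest residual after projection onto the span of the filters already chosen. The selection rule and the toy Gaussian filter library are my assumptions, not the paper's exact algorithm.

    import numpy as np

    def select_filters_mli(T, k):
        """Greedy MLI-style pick of k columns of T (wavelengths x filters)."""
        chosen = [int(np.argmax(np.linalg.norm(T, axis=0)))]  # max-norm start
        for _ in range(k - 1):
            Q, _ = np.linalg.qr(T[:, chosen])        # basis of chosen span
            resid = T - Q @ (Q.T @ T)                # component outside span
            scores = np.linalg.norm(resid, axis=0)
            scores[chosen] = -np.inf                 # never re-pick a filter
            chosen.append(int(np.argmax(scores)))
        return chosen

    # Toy library: 50 Gaussian-shaped broadband filters on a wavelength grid
    wl = np.linspace(400.0, 700.0, 301)
    centers = np.linspace(420.0, 680.0, 50)
    T = np.exp(-0.5 * ((wl[:, None] - centers[None, :]) / 40.0) ** 2)
    print(select_filters_mli(T, 8))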

  18. The calculation of maximum permissible exposure levels for laser radiation

    International Nuclear Information System (INIS)

    Tozer, B.A.

    1979-01-01

    The maximum permissible exposure data of the revised standard BS 4803 are presented as a set of decision charts which ensure that the user automatically takes into account such details as pulse length and pulse pattern, limiting angular subtense, combinations of multiple wavelength and/or multiple pulse lengths, etc. The two decision charts given are for the calculation of radiation hazards to skin and eye respectively. (author)

  19. Sugar export limits size of conifer needles

    DEFF Research Database (Denmark)

    Rademaker, Hanna; Zwieniecki, Maciej A.; Bohr, Tomas

    2017-01-01

    Plant leaf size varies by more than three orders of magnitude, from a few millimeters to over one meter. Conifer leaves, however, are relatively short and the majority of needles are no longer than 6 cm. The reason for the strong confinement of the trait-space is unknown. We show that sugars...... does not contribute to sugar flow. Remarkably, we find that the size of the active part does not scale with needle length. We predict a single maximum needle size of 5 cm, in accord with data from 519 conifer species. This could help rationalize the recent observation that conifers have significantly...

  20. Maximum likelihood estimation of the position of a radiating source in a waveguide

    International Nuclear Information System (INIS)

    Hinich, M.J.

    1979-01-01

    An array of sensors is receiving radiation from a source of interest. The source and the array are in a one- or two-dimensional waveguide. The maximum-likelihood estimators of the coordinates of the source are analyzed under the assumption that the noise field is Gaussian. The Cramer-Rao lower bound is of the order of the number of modes which define the source excitation function. The results show that the accuracy of the maximum likelihood estimator of source depth using a vertical array in an infinite horizontal waveguide (such as the ocean) is limited by the number of modes detected by the array, regardless of the array size.

  1. The maximum standardized uptake value is more reliable than size measurement in early follow-up to evaluate potential pulmonary malignancies following radiofrequency ablation.

    Science.gov (United States)

    Alafate, Aierken; Shinya, Takayoshi; Okumura, Yoshihiro; Sato, Shuhei; Hiraki, Takao; Ishii, Hiroaki; Gobara, Hideo; Kato, Katsuya; Fujiwara, Toshiyoshi; Miyoshi, Shinichiro; Kaji, Mitsumasa; Kanazawa, Susumu

    2013-01-01

    We retrospectively evaluated the accumulation of fluorodeoxyglucose (FDG) in pulmonary malignancies without local recurrence during a 2-year follow-up on positron emission tomography (PET)/computed tomography (CT) after radiofrequency ablation (RFA). Thirty tumors in 25 patients were studied (10 non-small cell lung cancers; 20 pulmonary metastatic tumors). PET/CT was performed before RFA, 3 months after RFA, and 6 months after RFA. We assessed the FDG accumulation with the maximum standardized uptake value (SUVmax) compared with the diameters of the lesions. The SUVmax had a decreasing tendency in the first 6 months and, at 6 months post-ablation, FDG accumulation was less affected by inflammatory changes than at 3 months post-RFA. The diameter of the ablated lesion exceeded that of the initial tumor at 3 months post-RFA and shrank to pre-ablation dimensions by 6 months post-RFA. SUVmax was more reliable than the size measurements by CT in the first 6 months after RFA, and PET/CT at 6 months post-RFA may be more appropriate for the assessment of FDG accumulation than that at 3 months post-RFA.

  2. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan

    2011-10-10

    Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n³) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.

  3. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan; Huang, Jianhua Z.

    2011-01-01

    Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n³) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.
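
    A compact sketch of the two-part construction described above: a reduced-rank covariance built on a set of knots, plus the residual covariance tapered to a compactly supported band. The exponential covariance, spherical taper, knot count and taper range are all illustrative assumptions.

    import numpy as np

    def exp_cov(x, y, range_=0.3):
        return np.exp(-np.abs(x[:, None] - y[None, :]) / range_)

    def taper(x, y, gamma=0.15):
        """Spherical taper: compactly supported, zero beyond distance gamma."""
        d = np.abs(x[:, None] - y[None, :])
        t = (1 - d / gamma) ** 2 * (1 + d / (2 * gamma))
        return np.where(d < gamma, t, 0.0)

    rng = np.random.default_rng(2)
    s = np.sort(rng.uniform(0.0, 1.0, 500))        # observation locations
    knots = np.linspace(0.0, 1.0, 30)              # reduced-rank knots

    C_sk, C_kk = exp_cov(s, knots), exp_cov(knots, knots)
    C_lr = C_sk @ np.linalg.solve(C_kk, C_sk.T)    # large-scale, low rank

    C_full = exp_cov(s, s)
    C_fsa = C_lr + (C_full - C_lr) * taper(s, s)   # + sparse small-scale part
    print(np.abs(C_full - C_fsa).max())            # small approximation error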

  4. Mid-depth temperature maximum in an estuarine lake

    Science.gov (United States)

    Stepanenko, V. M.; Repina, I. A.; Artamonov, A. Yu; Gorin, S. L.; Lykossov, V. N.; Kulyamin, D. V.

    2018-03-01

    The mid-depth temperature maximum (TeM) was measured in the estuarine Bol’shoi Vilyui Lake (Kamchatka peninsula, Russia) in summer 2015. We applied the 1D k-ɛ model LAKE to this case and found that it successfully simulates the phenomenon. We argue that the main prerequisite for mid-depth TeM development is a salinity increase below the freshwater mixed layer that is sharp enough for the temperature increase with depth not to cause convective mixing and double diffusion there. Given that this condition is satisfied, the TeM magnitude is controlled by physical factors which we identify as: radiation absorption below the mixed layer, mixed-layer temperature dynamics, vertical heat conduction and water-sediment heat exchange. In addition to these, we formulate the mechanism of temperature maximum ‘pumping’, resulting from the phase shift between the diurnal cycles of mixed-layer depth and temperature maximum magnitude. Based on the LAKE model results we quantify the contribution of the above mechanisms and find their individual significance highly sensitive to water turbidity. Relying on the physical mechanisms identified, we define the environmental conditions favouring summertime TeM development in salinity-stratified lakes as: small mixed-layer depth, weak wind and cloudless weather. We exemplify the effect of mixed-layer depth on TeM with a set of selected lakes.

  5. The definition of basic parameters of the set of small-sized equipment for preparation of dry mortar for various applications

    Directory of Open Access Journals (Sweden)

    Emelyanova Inga

    2017-01-01

    Full Text Available Based on an information search and a review of the scientific literature, unresolved issues in the preparation of dry construction mixtures on a construction site have been identified. The designs of existing technological complexes for the production of dry construction mixtures are considered, and their main drawbacks for application on a construction site are identified. On the basis of this research, designs of technological sets of small-sized equipment for the preparation of dry construction mixtures on the construction site are proposed. The basis for the proposed technological kits is a new design of concrete mixers operating in cascade mode. A technique for calculating the main parameters of the technological sets of equipment is proposed, depending on the base machine of the kit.

  6. Blocking sets in Desarguesian planes

    NARCIS (Netherlands)

    Blokhuis, A.; Miklós, D.; Sós, V.T.; Szönyi, T.

    1996-01-01

    We survey recent results concerning the size of blocking sets in Desarguesian projective and affine planes, and implications of these results, and of the techniques used to prove them, for related problems, such as the size of maximal partial spreads, small complete arcs and small strong representative systems.

  7. Sizing and control of trailing edge flaps on a smart rotor for maximum power generation in low fatigue wind regimes

    DEFF Research Database (Denmark)

    Smit, Jeroen; Bernhammer, Lars O.; Navalkar, Sachin T.

    2016-01-01

    to fatigue damage have been identified. In these regions, the turbine energy output can be increased by deflecting the trailing edge (TE) flap in order to track the maximum power coefficient as a function of local, instantaneous speed ratios. For this purpose, the TE flap configuration for maximum power generation has been determined using blade element momentum theory. As a first step, the operation in non-uniform wind field conditions was analysed. Firstly, the deterministic fluctuation in local tip speed ratio due to wind shear was evaluated. The second effect is associated with time delays in adapting the rotor

  8. Distribution of phytoplankton groups within the deep chlorophyll maximum

    KAUST Repository

    Latasa, Mikel

    2016-11-01

    The fine vertical distribution of phytoplankton groups within the deep chlorophyll maximum (DCM) was studied in the NE Atlantic during summer stratification. A simple but unconventional sampling strategy allowed examining the vertical structure with ca. 2 m resolution. The distribution of Prochlorococcus, Synechococcus, chlorophytes, pelagophytes, small prymnesiophytes, coccolithophores, diatoms, and dinoflagellates was investigated with a combination of pigment-markers, flow cytometry and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer. The more symmetrical distribution of chlorophyll than cells around the DCM peak was due to the increase of pigment per cell with depth. We found a vertical alignment of phytoplankton groups within the DCM layer indicating preferences for different ecological niches in a layer with strong gradients of light and nutrients. Prochlorococcus occupied the shallowest and diatoms the deepest layers. Dinoflagellates, Synechococcus and small prymnesiophytes preferred shallow DCM layers, and coccolithophores, chlorophytes and pelagophytes showed a preference for deep layers. Cell size within groups changed with depth in a pattern related to their mean size: the cell volume of the smallest group increased the most with depth while the cell volume of the largest group decreased the most. The vertical alignment of phytoplankton groups confirms that the DCM is not a homogeneous entity and indicates groups’ preferences for different ecological niches within this layer.

  9. Normal Indian pituitary gland size on MR imaging

    International Nuclear Information System (INIS)

    Gupta, A.K.; Jena, A.N.; Gulati, P.K.; Marwah, R.K.; Tripathi, R.P.; Sharma, R.K.; Khanna, C.M.

    1994-01-01

    The size of the pituitary gland was measured in 294 subjects who had no known pituitary or hypothalamic disorders. Midsagittal T1-weighted images showing the maximum dimensions of the pituitary gland were used for measurement of its height in each age and sex group. The mean pituitary height in men was 5.3 mm (SD = 0.9 mm), whereas in women the mean height was 5.9 mm (SD = 1.2 mm). Beyond 10 years of age, the pituitary height measured was greater in women than in men. The gland height showed a gradual decrease with increasing age after the age of 30 years in both men and women, except in the age group of 51-60 years, which showed a paradoxical increase in size. The minimum gland height found in this study was 2.5 mm and the maximum, 8.8 mm. The study presents a demographic profile of pituitary gland size in north Indian subjects as measured on MR images. (author). 6 refs., 2 tabs., 1 fig

  10. Collimator setting optimization in intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Williams, M.; Hoban, P.

    2001-01-01

    Full text: The aim of this study was to investigate the role of collimator angle and bixel size settings in IMRT when using the step-and-shoot method of delivery. Of particular interest is minimisation of the total monitor units (MUs) delivered. Beam intensity maps with bixel size 10 x 10 mm were segmented into MLC leaf sequences and the collimator angle optimised to minimise the total number of MUs. The monitor units were estimated from the maximum sum of positive-gradient intensity changes along the direction of leaf motion. To investigate the use of low resolution maps at optimum collimator angles, several high resolution maps with bixel size 5 x 5 mm were generated. These were resampled into bixel sizes 5 x 10 mm and 10 x 10 mm and the collimator angle optimised to minimise the RMS error between the original and resampled maps. Finally, a clinical IMRT case was investigated with the collimator angle optimised. Both the dose distribution and dose-volume histograms were compared between the standard IMRT plan and the optimised plan. For the 10 x 10 mm bixel maps there was a variation of 5%-40% in monitor units at the different collimator angles. The maps with a high degree of radial symmetry showed little variation. For the resampled 5 x 5 mm maps, a small RMS error was achievable with a 5 x 10 mm bixel size at particular collimator positions. This was most noticeable for maps with an elongated intensity distribution. A comparison between the 5 x 5 mm bixel plan and the 5 x 10 mm plan showed no significant difference in dose distribution. The monitor units required to deliver an intensity modulated field can be reduced by rotating the collimator and aligning the direction of leaf motion with the axis of the fluence map that has the least intensity variation. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine
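
    The MU estimate described above is easy to state in code: for each leaf row, sum the positive-gradient intensity increments along the leaf-travel direction (counting the first bixel as an opening increment) and take the maximum over rows; a 90° collimator rotation corresponds to transposing the map. The fluence values below are arbitrary.

    import numpy as np

    def mu_proxy(fluence):
        """Max over leaf rows of the summed positive intensity increments."""
        diffs = np.diff(fluence, axis=1)             # along leaf-travel axis
        rows = fluence[:, 0] + np.clip(diffs, 0.0, None).sum(axis=1)
        return rows.max()

    f = np.array([[0, 2, 1, 3],
                  [1, 1, 4, 0],
                  [0, 3, 2, 2]], dtype=float)

    print("collimator  0 deg:", mu_proxy(f))    # leaves travel along rows
    print("collimator 90 deg:", mu_proxy(f.T))  # rotated: transposed map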

  11. Missing portion sizes in FFQ

    DEFF Research Database (Denmark)

    Køster-Rasmussen, Rasmus; Siersma, Volkert Dirk; Halldorson, Thorhallur I.

    2015-01-01

    k-nearest neighbours (KNN) were compared with a reference based on self-reported portion sizes (quantified by a photographic food atlas embedded in the FFQ). Setting: The Danish Health Examination Survey 2007–2008. Subjects: The study included 3728 adults with complete portion size data. Results: Compared

  12. Sensitivity of C-Band Polarimetric Radar-Based Drop Size Distribution Measurements to Maximum Diameter Assumptions

    Science.gov (United States)

    Carey, Lawrence D.; Petersen, Walter A.

    2011-01-01

    The estimation of rain drop size distribution (DSD) parameters from polarimetric radar observations is accomplished by first establishing a relationship between differential reflectivity (Zdr) and the central tendency of the rain DSD, such as the median volume diameter (D0). Since Zdr does not provide a direct measurement of DSD central tendency, the relationship is typically derived empirically from rain drop and radar scattering models (e.g., D0 = F[Zdr]). Past studies have explored the general sensitivity of these models to temperature, radar wavelength, the drop shape vs. size relation, and DSD variability. Much progress has been made in recent years in measuring the drop shape and DSD variability using surface-based disdrometers, such as the 2D Video disdrometer (2DVD), and documenting their impact on polarimetric radar techniques. In addition to measuring drop shape, another advantage of the 2DVD over earlier impact type disdrometers is its ability to resolve drop diameters in excess of 5 mm. Despite this improvement, the sampling limitations of a disdrometer, including the 2DVD, make it very difficult to adequately measure the maximum drop diameter (Dmax) present in a typical radar resolution volume. As a result, Dmax must still be assumed in the drop and radar models from which D0 = F[Zdr] is derived. Since scattering resonance at C-band wavelengths begins to occur in drop diameters larger than about 5 mm, modeled C-band radar parameters, particularly Zdr, can be sensitive to Dmax assumptions. In past C-band radar studies, a variety of Dmax assumptions have been made, including the actual disdrometer estimate of Dmax during a typical sampling period (e.g., 1-3 minutes), Dmax = C (where C is constant at values from 5 to 8 mm), and Dmax = M*D0 (where the constant multiple, M, is fixed at values ranging from 2.5 to 3.5). The overall objective of this NASA Global Precipitation Measurement
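
    The truncation effect itself can be illustrated without any scattering model: below, the median volume diameter D0 of a gamma DSD is computed numerically for several assumed Dmax values. The DSD parameters are arbitrary, and the actual Zdr sensitivity at C band would be amplified by resonance scattering.

    import numpy as np

    def d0_of_truncated_gamma(mu=0.0, lam=1.0, d_max=8.0):
        """Median volume diameter of N(D) ~ D**mu * exp(-lam*D), D <= d_max."""
        d = np.linspace(1e-3, d_max, 4000)
        m3 = np.cumsum(d ** (mu + 3) * np.exp(-lam * d))  # cumulative 3rd moment
        return d[np.searchsorted(m3, 0.5 * m3[-1])]

    for d_max in (5.0, 6.0, 8.0, 20.0):     # 20 mm ~ effectively untruncated
        d0 = d0_of_truncated_gamma(d_max=d_max)
        print(f"Dmax = {d_max:4.1f} mm -> D0 = {d0:.2f} mm")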

  13. Size matter!

    DEFF Research Database (Denmark)

    Hansen, Pelle Guldborg; Jespersen, Andreas Maaløe; Skov, Laurits Rhoden

    2015-01-01

    trash bags according to the size of plates and weighed in bulk. Results: Those eating from smaller plates (n=145) left significantly less food to waste (avg. 14.8 g) than participants eating from standard plates (n=75) (avg. 20 g), amounting to a reduction of 25.8%. Conclusions: Our field experiment tests... the hypothesis that a decrease in the size of food plates may lead to significant reductions in food waste from buffets. It supports and extends the set of circumstances in which a recent experiment found that reduced dinner plates in a hotel chain led to reduced quantities of leftovers.

  14. Beauty, body size and wages: Evidence from a unique data set.

    Science.gov (United States)

    Oreffice, Sonia; Quintana-Domeque, Climent

    2016-09-01

    We analyze how attractiveness rated at the start of the interview in the German General Social Survey is related to weight, height, and body mass index (BMI), separately by gender and accounting for interviewers' characteristics or fixed effects. We show that height, weight, and BMI all strongly contribute to male and female attractiveness when attractiveness is rated by opposite-sex interviewers, and that anthropometric characteristics are irrelevant to male interviewers when assessing male attractiveness. We also estimate whether, controlling for beauty, body size measures are related to hourly wages. We find that anthropometric attributes play a significant role in wage regressions in addition to attractiveness, showing that body size cannot be dismissed as a simple component of beauty. Our findings are robust to controlling for health status and accounting for selection into working. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Evolution of the earliest horses driven by climate change in the Paleocene-Eocene Thermal Maximum.

    Science.gov (United States)

    Secord, Ross; Bloch, Jonathan I; Chester, Stephen G B; Boyer, Doug M; Wood, Aaron R; Wing, Scott L; Kraus, Mary J; McInerney, Francesca A; Krigbaum, John

    2012-02-24

    Body size plays a critical role in mammalian ecology and physiology. Previous research has shown that many mammals became smaller during the Paleocene-Eocene Thermal Maximum (PETM), but the timing and magnitude of that change relative to climate change have been unclear. A high-resolution record of continental climate and equid body size change shows a directional size decrease of ~30% over the first ~130,000 years of the PETM, followed by a ~76% increase in the recovery phase of the PETM. These size changes are negatively correlated with temperature inferred from oxygen isotopes in mammal teeth and were probably driven by shifts in temperature and possibly high atmospheric CO2 concentrations. These findings could be important for understanding mammalian evolutionary responses to future global warming.

  16. Influence of cervical preflaring on apical file size determination.

    Science.gov (United States)

    Pecora, J D; Capelli, A; Guerisoli, D M Z; Spanó, J C E; Estrela, C

    2005-07-01

    To investigate the influence of cervical preflaring with different instruments (Gates-Glidden drills, Quantec Flare series instruments and LA Axxess burs) on the first file that binds at working length (WL) in maxillary central incisors. Forty human maxillary central incisors with complete root formation were used. After standard access cavities, a size 06 K-file was inserted into each canal until the apical foramen was reached. The WL was set 1 mm short of the apical foramen. Group 1 received the initial apical instrument without previous preflaring of the cervical and middle thirds of the root canal. Group 2 had the cervical and middle portion of the root canals enlarged with Gates-Glidden drills sizes 90, 110 and 130. Group 3 had the cervical and middle thirds of the root canals enlarged with nickel-titanium Quantec Flare series instruments. Titanium-nitride treated, stainless steel LA Axxess burs were used for preflaring the cervical and middle portions of root canals from group 4. Each canal was sized using manual K-files, starting with size 08 files with passive movements until the WL was reached. File sizes were increased until a binding sensation was felt at the WL, and the instrument size was recorded for each tooth. The apical region was then observed under a stereoscopic magnifier, images were recorded digitally and the differences between root canal and maximum file diameters were evaluated for each sample. Significant differences were found between the experimental groups regarding the anatomical diameter at the WL and the first file to bind in the canal; Flare instruments were ranked in an intermediary position, with no statistically significant differences between them (0.093 mm average). The instrument binding technique for determining the anatomical diameter at the WL is not precise. Preflaring of the cervical and middle thirds of the root canal improved anatomical diameter determination; the instrument used for preflaring played a major role in determining the

  17. Wobbling and LSF-based maximum likelihood expectation maximization reconstruction for wobbling PET

    International Nuclear Information System (INIS)

    Kim, Hang-Keun; Son, Young-Don; Kwon, Dae-Hyuk; Joo, Yohan; Cho, Zang-Hee

    2016-01-01

    Positron emission tomography (PET) is a widely used imaging modality; however, the PET spatial resolution is not yet satisfactory for precise anatomical localization of molecular activities. Detector size is the most important factor because it determines the intrinsic resolution, which is approximately half of the detector size and determines the ultimate PET resolution. Detector size, however, cannot be made too small because both the decreased detection efficiency and the increased septal penetration effect degrade the image quality. A wobbling and line spread function (LSF)-based maximum likelihood expectation maximization (WL-MLEM) algorithm, which combined the MLEM iterative reconstruction algorithm with wobbled sampling and LSF-based deconvolution using the system matrix, was proposed for improving the spatial resolution of PET without reducing the scintillator or detector size. The new algorithm was evaluated using a simulation, and its performance was compared with that of existing algorithms, such as conventional MLEM and LSF-based MLEM. Simulations demonstrated that the WL-MLEM algorithm yielded higher spatial resolution and image quality than the existing algorithms. The WL-MLEM algorithm with wobbling PET yielded substantially improved resolution compared with conventional algorithms with stationary PET. The algorithm can be easily extended to other iterative reconstruction algorithms, such as maximum a posteriori (MAP) and ordered subset expectation maximization (OSEM). The WL-MLEM algorithm with wobbling PET may offer improvements in both sensitivity and resolution, the two most sought-after features in PET design. - Highlights: • This paper proposed the WL-MLEM algorithm for PET and demonstrated its performance. • The WL-MLEM algorithm effectively combined wobbling and LSF-based MLEM. • WL-MLEM provided improvements in the spatial resolution and the PET image quality. • WL-MLEM can be easily extended to other iterative
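
    For reference, the standard MLEM update that WL-MLEM builds on is shown below in a minimal dense-matrix form; the toy system matrix and counts are synthetic, and the wobbling and LSF-deconvolution extensions of the paper are not included.

    import numpy as np

    def mlem(A, y, n_iter=100):
        """Standard MLEM: x <- x / (A^T 1) * A^T (y / (A x))."""
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])        # sensitivity (normalization)
        for _ in range(n_iter):
            proj = np.maximum(A @ x, 1e-12)     # forward projection
            x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
        return x

    rng = np.random.default_rng(3)
    A = rng.uniform(size=(40, 16))              # toy system matrix
    x_true = rng.uniform(size=16)               # toy activity distribution
    y = rng.poisson(A @ x_true * 50) / 50.0     # noisy projection data
    print(np.round(mlem(A, y), 2))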

  18. Type Ibn Supernovae Show Photometric Homogeneity and Spectral Diversity at Maximum Light

    Energy Technology Data Exchange (ETDEWEB)

    Hosseinzadeh, Griffin; Arcavi, Iair; McCully, Curtis; Howell, D. Andrew [Las Cumbres Observatory, 6740 Cortona Dr Ste 102, Goleta, CA 93117-5575 (United States); Valenti, Stefano [Department of Physics, University of California, 1 Shields Ave, Davis, CA 95616-5270 (United States); Johansson, Joel [Department of Particle Physics and Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel); Sollerman, Jesper; Fremling, Christoffer; Karamehmetoglu, Emir [Oskar Klein Centre, Department of Astronomy, Stockholm University, Albanova University Centre, SE-106 91 Stockholm (Sweden); Pastorello, Andrea; Benetti, Stefano; Elias-Rosa, Nancy [INAF-Osservatorio Astronomico di Padova, Vicolo dell’Osservatorio 5, I-35122 Padova (Italy); Cao, Yi; Duggan, Gina; Horesh, Assaf [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, Mail Code 249-17, Pasadena, CA 91125 (United States); Cenko, S. Bradley [Astrophysics Science Division, NASA Goddard Space Flight Center, Mail Code 661, Greenbelt, MD 20771 (United States); Clubb, Kelsey I.; Filippenko, Alexei V. [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States); Corsi, Alessandra [Department of Physics, Texas Tech University, Box 41051, Lubbock, TX 79409-1051 (United States); Fox, Ori D., E-mail: griffin@lco.global [Space Telescope Science Institute, 3700 San Martin Dr, Baltimore, MD 21218 (United States); and others

    2017-02-20

    Type Ibn supernovae (SNe) are a small yet intriguing class of explosions whose spectra are characterized by low-velocity helium emission lines with little to no evidence for hydrogen. The prevailing theory has been that these are the core-collapse explosions of very massive stars embedded in helium-rich circumstellar material (CSM). We report optical observations of six new SNe Ibn: PTF11rfh, PTF12ldy, iPTF14aki, iPTF15ul, SN 2015G, and iPTF15akq. This brings the sample size of such objects in the literature to 22. We also report new data, including a near-infrared spectrum, on the Type Ibn SN 2015U. In order to characterize the class as a whole, we analyze the photometric and spectroscopic properties of the full Type Ibn sample. We find that, despite the expectation that CSM interaction would generate a heterogeneous set of light curves, as seen in SNe IIn, most Type Ibn light curves are quite similar in shape, declining at rates around 0.1 mag day^-1 during the first month after maximum light, with a few significant exceptions. Early spectra of SNe Ibn come in at least two varieties, one that shows narrow P Cygni lines and another dominated by broader emission lines, both around maximum light, which may be an indication of differences in the state of the progenitor system at the time of explosion. Alternatively, the spectral diversity could arise from viewing-angle effects or merely from a lack of early spectroscopic coverage. Together, the relative light curve homogeneity and narrow spectral features suggest that the CSM consists of a spatially confined shell of helium surrounded by a less dense extended wind.

  19. Generalized uncertainty principle and the maximum mass of ideal white dwarfs

    Energy Technology Data Exchange (ETDEWEB)

    Rashidi, Reza, E-mail: reza.rashidi@srttu.edu

    2016-11-15

    The effects of a generalized uncertainty principle on the structure of an ideal white dwarf star are investigated. The equation describing the equilibrium configuration of the star is a generalized form of the Lane–Emden equation. It is proved that the star always has a finite size. It is then argued that the maximum mass of such an ideal white dwarf tends to infinity, as opposed to the conventional case where it has a finite value.

  20. Peyronie's Reconstruction for Maximum Length and Girth Gain: Geometrical Principles

    Directory of Open Access Journals (Sweden)

    Paulo H. Egydio

    2008-01-01

    Full Text Available Peyronie's disease has been associated with penile shortening and some degree of erectile dysfunction. Surgical reconstruction should aim at a functional penis, that is, a straightened penis with enough rigidity for sexual intercourse. The procedure should be discussed preoperatively in terms of length and girth reconstruction in order to improve patient satisfaction. Tunical reconstruction for maximum penile length and girth restoration should be based on the maximum possible length of the dissected neurovascular bundle and the application of geometrical principles to define the precise site and size of the tunical incision and grafting procedure. As penile rectification and rigidity are required to achieve complete functional restoration of the penis, and 20 to 54% of patients experience associated erectile dysfunction, penile straightening alone may not be enough to provide complete functional restoration. Therefore, phosphodiesterase inhibitors, self-injection, or a penile prosthesis may need to be added in some cases.

  1. Performance of penalized maximum likelihood in estimation of genetic covariance matrices

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2011-11-01

    Full Text Available Abstract. Background: Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods: An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation in estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results: It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions: Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should
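
    One of the better-performing penalties mentioned above, shrinking the genetic correlation matrix toward the phenotypic one, has a one-line form. The sketch below shows only the shrinkage step, with made-up matrices; the paper embeds this inside a penalized (restricted) maximum likelihood fit with an estimated tuning factor.

    import numpy as np

    def shrink_toward(r_g, r_p, lam):
        """Shrink genetic correlations toward phenotypic ones (0 <= lam <= 1)."""
        return (1.0 - lam) * r_g + lam * r_p

    r_g = np.array([[1.0, 0.9], [0.9, 1.0]])   # noisy genetic correlations
    r_p = np.array([[1.0, 0.4], [0.4, 1.0]])   # well-estimated phenotypic ones
    print(shrink_toward(r_g, r_p, 0.3))        # lam plays the tuning-factor role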

  2. Measurement of in-bore side loads and comparison to first maximum yaw

    Directory of Open Access Journals (Sweden)

    Donald E. Carlucci

    2016-04-01

    Full Text Available In-bore yaw of a projectile in a gun tube has been shown to result in range loss if the yaw is significant. An attempt was made to determine whether relationships between in-bore yaw and projectile First Maximum Yaw (FMY) were observable. Experiments were conducted in which pressure transducers were mounted near the muzzle of a 155 mm cannon in three sets of four. Each set formed a cruciform pattern to obtain a differential pressure across the projectile. These data were then integrated to form a picture of the overall pressure distribution along the side of the projectile. The pressure distribution was used to determine the magnitude and direction of the overturning moment acting on the projectile. This moment and its resulting angular acceleration were then compared to the actual first maximum yaw observed in the test. The degree of correlation was examined using various statistical techniques. Overall uncertainty in the projectile dynamics was between 20% and 40% of the mean values of FMY.

  3. Adaptive Mean Queue Size and Its Rate of Change: Queue Management with Random Dropping

    OpenAIRE

    Karmeshu; Patel, Sanjeev; Bhatnagar, Shalabh

    2016-01-01

    The Random Early Detection (RED) active queue management (AQM) scheme uses the average queue size to calculate the dropping probability in terms of minimum and maximum thresholds. Under heavy load, the average queue size crosses the maximum threshold more frequently, resulting in frequent dropping of packets. An adaptive queue management with random dropping (AQMRD) algorithm is proposed which incorporates information not just about the average queue size but also about the rate of change of the average queue size.
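
    For context, classic RED's dropping rule is sketched below: an exponentially weighted moving average of the queue size, with a linear drop-probability ramp between the two thresholds. The count-based refinement of full RED is omitted, and the parameter values are typical textbook choices, not the paper's.

    import random

    class RedQueue:
        def __init__(self, min_th=5.0, max_th=15.0, max_p=0.1, wq=0.002):
            self.min_th, self.max_th = min_th, max_th
            self.max_p, self.wq = max_p, wq
            self.avg = 0.0

        def on_packet_arrival(self, queue_len):
            """Return True if the arriving packet should be dropped."""
            self.avg = (1 - self.wq) * self.avg + self.wq * queue_len  # EWMA
            if self.avg < self.min_th:
                return False
            if self.avg >= self.max_th:
                return True
            # linear ramp of drop probability between the thresholds
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            return random.random() < p

    red = RedQueue()
    drops = sum(red.on_packet_arrival(random.randint(0, 25)) for _ in range(10_000))
    print("drop fraction:", drops / 10_000)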

  4. On sets of vectors of a finite vector space in which every subset of basis size is a basis II

    OpenAIRE

    Ball, Simeon; De Beule, Jan

    2012-01-01

    This article contains a proof of the MDS conjecture for k ≤ 2p − 2. That is, if S is a set of vectors of a finite vector space in which every subset of S of size k is a basis, where q = p^h, p is prime and q is not, and k ≤ 2p − 2, then |S| ≤ q + 1. It also contains a short proof of the same fact for k ≤ p, for all q.

  5. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, costs less, and avoids need for retrieval and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  6. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression for the covariance matrix of the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system which appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  7. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are found by maximizing the power equation using differentiation. After the maximum values are found for each time of day, each individual quantity, the voltage at maximum power, the current at maximum power, and the maximum power itself, is plotted as a function of the time of day.
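
    The differentiation step can be written out for a simple diode-law panel: the maximum power point is where dP/dV = d(V·I(V))/dV crosses zero, which is found numerically below. The I-V parameters are illustrative assumptions.

    import numpy as np
    from scipy.optimize import brentq

    # Illustrative diode-law panel: I(V) = Isc - I0*(exp(V/a) - 1)
    Isc, I0, a = 3.0, 2e-8, 1.2

    def dP_dV(v):
        """Derivative of P = V*I(V); the MPP is where this crosses zero."""
        i = Isc - I0 * (np.exp(v / a) - 1.0)
        di = -I0 * np.exp(v / a) / a
        return i + v * di

    v_mpp = brentq(dP_dV, 0.1, 25.0)            # root of dP/dV
    i_mpp = Isc - I0 * (np.exp(v_mpp / a) - 1.0)
    print(f"V_mpp = {v_mpp:.2f} V, I_mpp = {i_mpp:.2f} A, "
          f"P_max = {v_mpp * i_mpp:.1f} W")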

  8. The effect of magnet size on the levitation force and attractive force of single-domain YBCO bulk superconductors

    International Nuclear Information System (INIS)

    Yang, W M; Chao, X X; Bian, X B; Liu, P; Feng, Y; Zhang, P X; Zhou, L

    2003-01-01

    The levitation forces between a single-domain YBCO bulk and several magnets of different sizes have been measured at 77 K to investigate the effect of magnet size on the levitation force. It is found that the levitation force reaches its largest (peak) value when the size of the magnet approaches that of the superconductor, with the other conditions fixed. The absolute maximum attractive force (in the field-cooled state) increases with increasing magnet size, and saturates when the magnet size approaches that of the superconductor. The maximum attractive force in the field-cooled (FC) state is much higher than that in the zero-field-cooled (ZFC) state. The results indicate that the effects of the magnetic field distribution on the levitation force have to be considered when designing and manufacturing superconducting devices

  9. Application of the maximum entropy method to profile analysis

    International Nuclear Information System (INIS)

    Armstrong, N.; Kalceff, W.; Cline, J.P.

    1999-01-01

    Full text: A maximum entropy (MaxEnt) method for analysing crystallite size- and strain-induced x-ray profile broadening is presented. This method treats the problems of determining the specimen profile, crystallite size distribution, and strain distribution in a general way by considering them as inverse problems. A common difficulty faced by many experimenters is their inability to determine a well-conditioned solution of the integral equation, which preserves the positivity of the profile or distribution. We show that the MaxEnt method overcomes this problem, while also enabling a priori information, in the form of a model, to be introduced into it. Additionally, we demonstrate that the method is fully quantitative, in that uncertainties in the solution profile or solution distribution can be determined and used in subsequent calculations, including mean particle sizes and rms strain. An outline of the MaxEnt method is presented for the specific problems of determining the specimen profile and crystallite or strain distributions for the correspondingly broadened profiles. This approach offers an alternative to standard methods such as those of Williamson-Hall and Warren-Averbach. An application of the MaxEnt method is demonstrated in the analysis of alumina size-broadened diffraction data (from NIST, Gaithersburg). It is used to determine the specimen profile and column-length distribution of the scattering domains. Finally, these results are compared with the corresponding Williamson-Hall and Warren-Averbach analyses. Copyright (1999) Australian X-ray Analytical Association Inc

  10. States of maximum polarization for a quantum light field and states of a maximum sensitivity in quantum interferometry

    International Nuclear Information System (INIS)

    Peřinová, Vlasta; Lukš, Antonín

    2015-01-01

    The SU(2) group is used in two different fields of quantum optics: quantum polarization and quantum interferometry. Quantum degrees of polarization may be based on distances of a polarization state from the set of unpolarized states. The maximum polarization is achieved when the state is pure and the distribution of the photon-number sums is optimized. In quantum interferometry, the SU(2) intelligent states also have the property that the Fisher measure of information is equal to the inverse minimum detectable phase shift, on the usual simplifying condition. Previously, the optimization of the Fisher information under a constraint was studied. Now, in the framework of constraint optimization, states similar to the SU(2) intelligent states are treated. (paper)

  11. TREDRA, Minimal Cut Sets Fault Tree Plot Program

    International Nuclear Information System (INIS)

    Fussell, J.B.

    1983-01-01

    1 - Description of problem or function: TREDRA is a computer program for drafting report-quality fault trees. The input to TREDRA is similar to input for standard computer programs that find minimal cut sets from fault trees. Output includes fault tree plots containing all standard fault tree logic and event symbols, gate and event labels, and an output description for each event in the fault tree. TREDRA contains the following features: a variety of program options that allow flexibility in the program output; capability for automatic pagination of the output fault tree, when necessary; input groups which allow labeling of gates, events, and their output descriptions; a symbol library which includes standard fault tree symbols plus several less frequently used symbols; user control of character size and overall plot size; and extensive input error checking and diagnostic oriented output. 2 - Method of solution: Fault trees are generated by user-supplied control parameters and a coded description of the fault tree structure consisting of the name of each gate, the gate type, the number of inputs to the gate, and the names of these inputs. 3 - Restrictions on the complexity of the problem: TREDRA can produce fault trees with a minimum of 3 and a maximum of 56 levels. The width of each level may range from 3 to 37. A total of 50 transfers is allowed during pagination

  12. Prediction of the Maximum Number of Repetitions and Repetitions in Reserve From Barbell Velocity.

    Science.gov (United States)

    García-Ramos, Amador; Torrejón, Alejandro; Feriche, Belén; Morales-Artacho, Antonio J; Pérez-Castilla, Alejandro; Padial, Paulino; Haff, Guy Gregory

    2018-03-01

    To provide 2 general equations to estimate the maximum possible number of repetitions (XRM) from the mean velocity (MV) of the barbell and the MV associated with a given number of repetitions in reserve, as well as to determine the between-session reliability of the MV associated with each XRM. After determination of the bench-press 1-repetition maximum (1RM; 1.15 ± 0.21 kg/kg body mass), 21 men (age 23.0 ± 2.7 y, body mass 72.7 ± 8.3 kg, body height 1.77 ± 0.07 m) completed 4 sets of as many repetitions as possible against relative loads of 60%1RM, 70%1RM, 80%1RM, and 90%1RM over 2 separate sessions. The different loads were tested in a randomized order with 10 min of rest between them. All repetitions were performed at the maximum intended velocity. Both the general equation to predict the XRM from the fastest MV of the set (CV = 15.8-18.5%) and the general equation to predict the MV associated with a given number of repetitions in reserve (CV = 14.6-28.8%) failed to provide data with acceptable between-subjects variability. However, a strong relationship (median r² = .984) and acceptable reliability were observed between the fastest MV of the set and the XRM when considering individual data. These results indicate that generalized group equations are not acceptable methods for estimating the XRM-MV relationship or the number of repetitions in reserve. When attempting to estimate the XRM-MV relationship, one must use individualized relationships to objectively estimate the exact number of repetitions that can be performed in a training set.
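
    The individualized approach the authors recommend can be sketched as a per-athlete fit between the fastest MV of a set and the repetitions completed, inverted to predict the XRM. The velocities and repetition counts below are made-up illustrative data, and the linear form is an assumption.

    import numpy as np

    # One athlete: fastest mean velocity (m/s) of sets taken to failure,
    # and the repetitions completed at each load (hypothetical data)
    fastest_mv = np.array([0.45, 0.55, 0.68, 0.80])
    xrm = np.array([4.0, 8.0, 14.0, 20.0])

    slope, intercept = np.polyfit(fastest_mv, xrm, 1)

    def predict_xrm(mv):
        """Predicted maximum repetitions from the fastest MV of a set."""
        return slope * mv + intercept

    print(round(predict_xrm(0.60), 1))   # about 10-11 repetitions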

  13. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio; Genton, Marc G.; Yokota, Rio

    2015-01-01

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic

  14. THE EFFECT OF NET PROFIT, OPERATING CASH FLOW, INVESTMENT OPPORTUNITY SET AND FIRM SIZE ON CASH DIVIDENDS (A Case Study of Manufacturing Companies Listed on the Indonesia Stock Exchange, 2010-2012)

    Directory of Open Access Journals (Sweden)

    Luluk Muhimatul Ifada

    2014-12-01

    Full Text Available This study aimed to investigate the influence of net profit, operating cash flow, investment opportunity set, and firm size on cash dividends. The sample of this research consists of manufacturing companies listed on the Indonesia Stock Exchange (BEI) in the period 2010-2012, as published at www.idx.co.id and recorded in the Indonesia Capital Market Directory (ICMD). Twenty-eight companies met the specified criteria. The analysis method is multiple regression analysis with a significance level of 5%, and the conclusions are based on the t-statistic results. The results prove that net profit has a significant positive influence on cash dividends, and that operating cash flow has a significant positive influence on cash dividends. The investment opportunity set has a negative but non-significant influence on cash dividends, while firm size has a positive but non-significant influence on cash dividends.

  15. Body size, swimming speed, or thermal sensitivity? Predator-imposed selection on amphibian larvae.

    Science.gov (United States)

    Gvoždík, Lumír; Smolinský, Radovan

    2015-11-02

    Many animals rely on their escape performance during predator encounters. Because of its dependence on body size and temperature, escape velocity is fully characterized by three measures: its absolute value, its size-corrected value, and its response to temperature (thermal sensitivity). Which of these is the primary target of predator-imposed selection is poorly understood. We examined predator (dragonfly larva)-imposed selection on prey (newt larvae) body size and characteristics of escape velocity using replicated and controlled predation experiments under seminatural conditions. Specifically, because these species experience a wide range of temperatures throughout their larval phases, we predicted that larvae achieving high swimming velocities across temperatures would have a selective advantage over more thermally sensitive individuals. Nonzero selection differentials indicated that predators selected on prey body size and on both absolute and size-corrected maximum swimming velocity. Comparison of selection differentials with the control confirmed selection only on body size, i.e., dragonfly larvae preferentially preyed on small newt larvae. Maximum swimming velocity and its thermal sensitivity showed low group repeatability, which contributed to the non-detectable selection on both characteristics of escape performance. In the newt-dragonfly interaction, body size thus plays a more important role than the maximum value or thermal sensitivity of swimming velocity during predator escape, corroborating the general importance of body size in predator-prey interactions. The absence of an appropriate control in predation experiments may lead to potentially misleading conclusions about the primary target of predator-imposed selection. Insights from predation experiments contribute to our understanding of the link between performance and fitness, and further improve mechanistic models of predator-prey interactions and food web dynamics.

  16. Superfast maximum-likelihood reconstruction for quantum tomography

    Science.gov (United States)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes a state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
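
    A minimal sketch of the projected-gradient idea (this is not the authors' implementation; the step size, iteration count and toy single-qubit POVM are illustrative assumptions): ascend the log-likelihood and project each iterate back onto the set of density matrices, i.e. Hermitian, positive semidefinite, unit trace, by projecting the eigenvalues onto the probability simplex.

        import numpy as np

        def project_to_density_matrix(H):
            """Project a Hermitian matrix onto {rho: rho >= 0, tr(rho) = 1}."""
            H = (H + H.conj().T) / 2
            w, V = np.linalg.eigh(H)
            u = np.sort(w)[::-1]                  # eigenvalues, descending
            css = np.cumsum(u)
            k = np.arange(1, len(u) + 1)
            j = np.nonzero(u + (1 - css) / k > 0)[0][-1]
            theta = (1 - css[j]) / (j + 1)        # simplex-projection shift
            w = np.maximum(w + theta, 0)
            return (V * w) @ V.conj().T           # V diag(w) V^dagger

        def mle_projected_gradient(povm, counts, dim, steps=2000, lr=0.05):
            """Maximize sum_k n_k log tr(E_k rho) over density matrices."""
            rho = np.eye(dim) / dim
            for _ in range(steps):
                probs = [max(np.real(np.trace(E @ rho)), 1e-12) for E in povm]
                grad = sum(n / p * E for E, p, n in zip(povm, probs, counts))
                rho = project_to_density_matrix(rho + lr * grad)
            return rho

        # Toy usage: single-qubit tomography with a Pauli-eigenbasis POVM.
        I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]])
        povm = [(I2 + X) / 4, (I2 - X) / 4,
                (I2 + np.diag([1.0, -1.0])) / 4, (I2 - np.diag([1.0, -1.0])) / 4]
        rho_hat = mle_projected_gradient(povm, [40, 10, 30, 20], dim=2)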

  17. PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation

    Energy Technology Data Exchange (ETDEWEB)

    Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.

    2007-06-23

    In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than the baseline and the highest F-score for the fine-grained English All-Words subtask.
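
    A minimal sketch of a supervised maximum-entropy word-sense classifier (multinomial logistic regression is the maximum-entropy model; the toy contexts and senses are invented, and this is not the PNNL feature set):

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Hypothetical training data: context windows and sense labels for "bank".
        contexts = ["river bank erosion water", "bank loan interest account",
                    "fishing on the muddy bank", "deposit money bank branch"]
        senses = ["bank#shore", "bank#finance", "bank#shore", "bank#finance"]

        # Bag-of-context features fed to a maximum-entropy (logistic) model.
        clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
        clf.fit(contexts, senses)
        print(clf.predict(["savings account at the bank"]))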

  18. A novel maximum power point tracking method for PV systems using fuzzy cognitive networks (FCN)

    Energy Technology Data Exchange (ETDEWEB)

    Karlis, A.D. [Electrical Machines Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece); Kottas, T.L.; Boutalis, Y.S. [Automatic Control Systems Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece)

    2007-03-15

    Maximum power point trackers (MPPTs) play an important role in photovoltaic (PV) power systems because they maximize the power output from a PV system for a given set of conditions, and therefore maximize array efficiency. This paper presents a novel MPPT method based on fuzzy cognitive networks (FCN). The new method achieves good maximum-power operation of any PV array under different conditions such as changing insolation and temperature. The numerical results show the effectiveness of the proposed algorithm. (author)

  19. Transient dwarfism of soil fauna during the Paleocene-Eocene Thermal Maximum.

    Science.gov (United States)

    Smith, Jon J; Hasiotis, Stephen T; Kraus, Mary J; Woody, Daniel T

    2009-10-20

    Soil organisms, as recorded by trace fossils in paleosols of the Willwood Formation, Wyoming, show significant body-size reductions and increased abundances during the Paleocene-Eocene Thermal Maximum (PETM). Paleobotanical, paleopedologic, and oxygen isotope studies indicate high temperatures during the PETM and sharp declines in precipitation compared with late Paleocene estimates. Insect and oligochaete burrows increase in abundance during the PETM, suggesting longer periods of soil development and improved drainage conditions. Crayfish burrows and molluscan body fossils, abundant below and above the PETM interval, are significantly less abundant during the PETM, likely because of drier floodplain conditions and lower water tables. Burrow diameters of the most abundant ichnofossils are 30-46% smaller within the PETM interval. As burrow size is a proxy for body size, significant reductions in burrow diameter suggest that their tracemakers were smaller bodied. Smaller body sizes may have resulted from higher subsurface temperatures, lower soil moisture conditions, or nutritionally deficient vegetation in the high-CO2 atmosphere inferred for the PETM. Smaller soil fauna co-occur with dwarf mammal taxa during the PETM; thus, a common forcing mechanism may have selected for small size in both above- and below-ground terrestrial communities. We predict that soil fauna have already shown reductions in size over the last 150 years of increased atmospheric CO2 and surface temperatures or that they will exhibit this pattern over the next century. We retrodict also that soil fauna across the Permian-Triassic and Triassic-Jurassic boundary events show significant size decreases because of similar forcing mechanisms driven by rapid global warming.

  1. Maximum entropy principle and hydrodynamic models in statistical mechanics

    International Nuclear Information System (INIS)

    Trovato, M.; Reggiani, L.

    2012-01-01

    This review presents the state of the art of the maximum entropy principle (MEP) in its classical and quantum (QMEP) formulation. Within the classical MEP we overview a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport in the presence of electric fields of arbitrary strength. For the macroscopic variables the linearized maximum entropy approach is developed, including full-band effects within a total energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. Analogously, the theoretical approach is applied to many one-dimensional n⁺nn⁺ submicron Si structures by using different band-structure models, different doping profiles and different applied biases, and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with available experimental data. Within the quantum MEP we introduce a quantum entropy functional of the reduced density matrix, and the principle of quantum maximum entropy is then asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we have developed a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport model within a Wigner function approach. The theory is formulated both in thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ², ħ being the reduced Planck constant. In particular, by using an arbitrary number of moments, we prove that: (i) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives both of the

  2. EFFECT OF FARM SIZE AND FREQUENCY OF CUTTING ON ...

    African Journals Online (AJOL)

    EFFECT OF FARM SIZE AND FREQUENCY OF CUTTING ON OUTPUT OF ... the use of Ordinary Least Square (OLS) estimation technique was used in analyzing ... frequency of cutting that would produce maximum output of the vegetable as ...

  3. Dependence of size and size distribution on reactivity of aluminum nanoparticles in reactions with oxygen and MoO3

    International Nuclear Information System (INIS)

    Sun, Juan; Pantoya, Michelle L.; Simon, Sindee L.

    2006-01-01

    The oxidation reaction of aluminum nanoparticles with oxygen gas and the thermal behavior of a metastable intermolecular composite (MIC) composed of the aluminum nanoparticles and molybdenum trioxide are studied with differential scanning calorimetry (DSC) as a function of the size and size distribution of the aluminum particles. Both broad and narrow size distributions have been investigated with aluminum particle sizes ranging from 30 to 160 nm; comparisons are also made to the behavior of micrometer-size particles. Several parameters have been used to characterize the reactivity of aluminum nanoparticles, including the fraction of aluminum that reacts prior to aluminum melting, heat of reaction, onset and peak temperatures, and maximum reaction rates. The results indicate that the reactivity of aluminum nanoparticles is significantly higher than that of the micrometer-size samples, but depending on the measure of reactivity, it may also depend strongly on the size distribution. The isoconversional method was used to calculate the apparent activation energy, and the values obtained for both the Al/O2 and Al/MoO3 reactions are in the range of 200-300 kJ/mol

  4. Prognostic significance of tumor size of small lung adenocarcinomas evaluated with mediastinal window settings on computed tomography.

    Directory of Open Access Journals (Sweden)

    Yukinori Sakao

    Full Text Available BACKGROUND: We aimed to show that the size of a lung adenocarcinoma evaluated using the mediastinal window setting on computed tomography is an important and useful measure for predicting invasiveness, lymph node metastasis and prognosis in small adenocarcinomas. METHODS: We evaluated 176 patients with small lung adenocarcinomas (diameter 1-3 cm) who underwent standard surgical resection. Tumours were examined using computed tomography with thin-section conditions (1.25 mm thick on high-resolution computed tomography), with tumour dimensions evaluated under two settings: lung window and mediastinal window. We also determined patient age, gender, preoperative nodal status, tumour size, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and pathological status (lymphatic vessel, vascular vessel or pleural invasion). Recurrence-free survival was used for prognosis. RESULTS: Lung window, mediastinal window, tumour disappearance ratio and preoperative nodal status were significant predictive factors for recurrence-free survival in univariate analyses. Areas under the receiver operator curves for recurrence were 0.76, 0.73 and 0.65 for mediastinal window, tumour disappearance ratio and lung window, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant predictive factors for lymph node metastasis in univariate analyses; areas under the receiver operator curves were 0.61, 0.76, 0.72 and 0.66 for lung window, mediastinal window, tumour disappearance ratio and preoperative serum carcinoembryonic antigen levels, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant factors for lymphatic vessel, vascular vessel or pleural invasion in univariate analyses; areas under the receiver operator curves were 0

  5. Prognostic Significance of Tumor Size of Small Lung Adenocarcinomas Evaluated with Mediastinal Window Settings on Computed Tomography

    Science.gov (United States)

    Sakao, Yukinori; Kuroda, Hiroaki; Mun, Mingyon; Uehara, Hirofumi; Motoi, Noriko; Ishikawa, Yuichi; Nakagawa, Ken; Okumura, Sakae

    2014-01-01

    Background We aimed to show that the size of a lung adenocarcinoma evaluated using the mediastinal window setting on computed tomography is an important and useful measure for predicting invasiveness, lymph node metastasis and prognosis in small adenocarcinomas. Methods We evaluated 176 patients with small lung adenocarcinomas (diameter 1–3 cm) who underwent standard surgical resection. Tumours were examined using computed tomography with thin-section conditions (1.25 mm thick on high-resolution computed tomography), with tumour dimensions evaluated under two settings: lung window and mediastinal window. We also determined patient age, gender, preoperative nodal status, tumour size, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and pathological status (lymphatic vessel, vascular vessel or pleural invasion). Recurrence-free survival was used for prognosis. Results Lung window, mediastinal window, tumour disappearance ratio and preoperative nodal status were significant predictive factors for recurrence-free survival in univariate analyses. Areas under the receiver operator curves for recurrence were 0.76, 0.73 and 0.65 for mediastinal window, tumour disappearance ratio and lung window, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant predictive factors for lymph node metastasis in univariate analyses; areas under the receiver operator curves were 0.61, 0.76, 0.72 and 0.66 for lung window, mediastinal window, tumour disappearance ratio and preoperative serum carcinoembryonic antigen levels, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant factors for lymphatic vessel, vascular vessel or pleural invasion in univariate analyses; areas under the receiver operator curves were 0.60, 0.81, 0

  6. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    Science.gov (United States)

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-07

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.
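
    A minimal sketch of the MLEM update rule described above, on a toy 1-D deconvolution problem (the system matrix and data are illustrative assumptions, not the paper's geometry):

        import numpy as np

        def mlem(A, y, iters=200):
            """MLEM for y ~ Poisson(A @ x) with x >= 0.

            A: (m, n) system matrix (e.g. ray-trace weights from the source
            plane to the measurement plane); y: (m,) measured fluence profile.
            """
            x = np.ones(A.shape[1])            # flat initial source estimate
            sens = A.sum(axis=0)               # sensitivity (column sums)
            for _ in range(iters):
                proj = A @ x                   # forward projection
                ratio = y / np.maximum(proj, 1e-12)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
            return x

        # Toy usage: a Gaussian source blurred by a known kernel matrix A.
        n = 64
        xs = np.linspace(-5.0, 5.0, n)
        true_source = np.exp(-xs**2 / 0.5)
        A = np.exp(-(xs[:, None] - xs[None, :])**2 / 2.0)
        A /= A.sum(axis=0)
        estimate = mlem(A, A @ true_source)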

  7. "Size-Independent" Single-Electron Tunneling.

    Science.gov (United States)

    Zhao, Jianli; Sun, Shasha; Swartz, Logan; Riechers, Shawn; Hu, Peiguang; Chen, Shaowei; Zheng, Jie; Liu, Gang-Yu

    2015-12-17

    Incorporating single-electron tunneling (SET) of metallic nanoparticles (NPs) into modern electronic devices offers great promise to enable new properties; however, it is technically very challenging because of the necessity to integrate ultrasmall (<10 nm) particles into the devices. The nanosize requirement is intrinsic for NPs to exhibit quantum or SET behaviors at room temperature, for example 10 nm or smaller. This work represents the first observation of SET that defies the well-known size restriction. Using polycrystalline Au NPs synthesized via our newly developed solid-state glycine-matrix method, a Coulomb blockade was observed for particles as large as tens of nanometers, and the blockade voltage exhibited little dependence on the size of the NPs. These observations are counterintuitive at first glance. Further investigations reveal that each observed SET arises from the ultrasmall single-crystalline grain(s) within the polycrystalline NP, which is (are) sufficiently isolated from the nearest-neighbor grains. This work demonstrates the concept and feasibility of overcoming orthodox spatial-confinement requirements to achieve quantum effects.

  8. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  9. Strong crystal size effect on deformation twinning

    DEFF Research Database (Denmark)

    Yu, Qian; Shan, Zhi-Wei; Li, Ju

    2010-01-01

    Deformation twinning in crystals is a highly coherent inelastic shearing process that controls the mechanical behaviour of many materials, but its origin and spatio-temporal features are shrouded in mystery. Using micro-compression and in situ nano-compression experiments, here we find that the stress required for deformation twinning increases drastically with decreasing sample size of a titanium alloy single crystal, until the sample size is reduced to one micrometre, below which deformation twinning is entirely replaced by less correlated, ordinary dislocation plasticity. Accompanying the transition in deformation mechanism, the maximum flow stress of the submicrometre-sized pillars was observed to saturate at a value close to titanium's ideal strength. We develop a 'stimulated slip' model to explain the strong size dependence of deformation twinning.

  10. Maximum credible accident analysis for TR-2 reactor conceptual design

    International Nuclear Information System (INIS)

    Manopulo, E.

    1981-01-01

    A new reactor, TR-2, of 5 MW, designed in cooperation with CEN/GRENOBLE, is under construction in the open pool of the TR-1 reactor of 1 MW set up by AMF Atomics at the Cekmece Nuclear Research and Training Center. In this report the fission product inventory and the doses released after the maximum credible accident are studied. The diffusion of gaseous fission products to the environment and the potential radiation risks to the population are evaluated.

  11. Space power subsystem sizing

    International Nuclear Information System (INIS)

    Geis, J.W.

    1992-01-01

    This paper discusses a Space Power Subsystem Sizing program which has been developed by the Aerospace Power Division of Wright Laboratory, Wright-Patterson Air Force Base, Ohio. The Space Power Subsystem program (SPSS) contains the necessary equations and algorithms to calculate photovoltaic array power performance, including end-of-life (EOL) and beginning-of-life (BOL) specific power (W/kg) and areal power density (W/m²). Additional equations and algorithms are included in the spreadsheet for determining maximum eclipse time as a function of orbital altitude and inclination. The Space Power Subsystem Sizing program (SPSS) has been used to determine the performance of several candidate power subsystems for both Air Force and SDIO potential applications. Trade-offs have been made between subsystem weight and areal power density (W/m²) as influenced by orbital high-energy particle flux and time in orbit.

  12. The Emotional Climate of the Interpersonal Classroom in a Maximum Security Prison for Males.

    Science.gov (United States)

    Meussling, Vonne

    1984-01-01

    Examines the nature, the task, and the impact of teaching in a maximum security prison for males. Data are presented concerning the curriculum design used in order to create a nonevaluative atmosphere. Inmates' reactions to self-disclosure and open communication in a prison setting are evaluated. (CT)

  13. Review of probable maximum flood definition at B.C. Hydro

    International Nuclear Information System (INIS)

    Keenhan, P.T.; Kroeker, M.G.; Neudorf, P.A.

    1991-01-01

    Probable maximum floods (PMF) have been derived for British Columbia Hydro structures since the design of the W.A.C. Bennett Dam in 1965. A dam safety program for estimating PMF for structures designed before that time has been ongoing since 1979. The program, which has resulted in rehabilitative measures at dams not meeting currently established standards, is now being directed at the more recently constructed larger structures on the Peace and Columbia rivers. Since 1965, detailed studies have produced 23 probable maximum precipitation (PMP) and 24 PMF estimates. What defines a PMF in British Columbia, in terms of an appropriate combination of meteorological conditions, varies with basin size and the climatic effect of mountain barriers. PMP is estimated using three methods: storm maximization and transposition, the orographic separation method, and modification of non-orographic PMP for orography. Details of, and problems encountered with, these methods are discussed. Tools or methods to assess meteorological limits for antecedent conditions and for limits to runoff during extreme events have not been developed and require research effort. 11 refs., 2 figs., 3 tabs

  14. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data sets showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 (protein) times faster (range: 1.2-20.7) than the standard parsimony bootstrap implemented in PAUP*, but 1.6 (DNA) to 4.1 (protein) times slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 (protein) times faster (range: 0.3-63.9) than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot.

  15. Cosmic structure sizes in generic dark energy models

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Sourav [Indian Institute of Technology Ropar, Department of Physics, Rupnagar, Punjab (India); Tomaras, Theodore N. [ITCP and Department of Physics, University of Crete, Heraklion (Greece)

    2017-08-15

    The maximum allowable size of a spherical cosmic structure as a function of its mass is determined by the maximum turn-around radius R_TA,max, the distance from its center where the attraction on a radial test particle due to the spherical mass is balanced by the repulsion due to the ambient dark energy. In this work, we extend the existing results in several directions. (a) We first show that, for w ≠ -1, the expression for R_TA,max found earlier using cosmological perturbation theory can be derived using a static geometry as well. (b) In the generic dark energy model with arbitrary time-dependent state parameter w(t), taking into account the effect of inhomogeneities upon the dark energy as well, it is shown that the data constrain w(t = today) > -2.3. (c) We address the quintessence and the generalized Chaplygin gas models, both of which are shown to predict structure sizes consistent with observations. (orig.)
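
    For reference, in the cosmological-constant case (w = -1) the maximum turn-around radius takes the closed form below, a standard result from the turnaround-radius literature (quoted for context; the abstract itself does not state it):

        R_{TA,max} = \left( \frac{3\,G M}{\Lambda c^{2}} \right)^{1/3}

    where M is the structure mass and Λ the cosmological constant.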

  16. The relationship between bed size and profitability in South Carolina hospitals.

    Science.gov (United States)

    Kim, Yang K; Glover, Saundra H; Stoskopf, Carleen H; Boyd, Suzan D

    2002-01-01

    The purpose of the study is to identify factors affecting hospital profitability and to find the optimal hospital bed size that assures maximum profit. This is a cross-sectional study using survey data obtained from acute care hospitals in South Carolina in 1997. The relationship between hospital profitability and bed size revealed that as bed size increases, profitability increases, decreases, and then increases again. For the patient profit proportion, the turning points in bed size are 238.22 and 560.08. For the total profit proportion, the turning points in bed size are 223.31 and 503.86. These results indicate that medium-size hospitals have lower profitability.
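
    The quoted turning points are the roots of the derivative of a cubic profit-bed-size fit; a minimal sketch with invented coefficients (the study's fitted values are not given in the abstract):

        import numpy as np

        # Hypothetical cubic fit: profit = a*b**3 + c2*b**2 + c1*b + c0 (b = beds).
        a, c2, c1 = 1.0e-6, -1.175e-3, 0.40      # illustrative values only

        # Turning points are the roots of the derivative 3a*b^2 + 2*c2*b + c1.
        turning_points = np.roots([3 * a, 2 * c2, c1])
        print(sorted(turning_points.real))       # two bed-size turning points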

  17. A method of size inspection for fruit with machine vision

    Science.gov (United States)

    Rao, Xiuqin; Ying, Yibin

    2005-11-01

    A real-time machine vision system for fruit quality inspection was developed, consisting of rollers, an encoder, a lighting chamber, a TMS-7DSP CCD camera (PULNIX Inc.), a computer (P4 1.8 GHz, 128 MB) and a set of grading controllers. The image was binarized and the edge was detected with a line-scan-based digital image description; the MER (minimum enclosing rectangle) was then applied to measure fruit size, but failed, because the points it measures differ from those measured with a vernier caliper. An improved method, called a software vernier caliper, was therefore developed. A line is drawn between the weight centre O of the fruit and a point A on the edge, and the point B where line OA crosses the opposite side of the edge is calculated. For a point C between A and B, a point D on the edge is searched such that CD is perpendicular to AB; by moving C between A and B, the maximum length of CD is recorded as an extremum value. Moving A from the start point to the half point of the edge yields a series of such extrema, from which the diameter is obtained. In a test of 80 navel oranges, the maximum diameter error was less than 1 mm.
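
    A minimal sketch of the software-vernier-caliper idea on an extracted contour, simplified to measuring the maximal extent perpendicular to a sweep of directions through the centroid (the data layout and discretization are assumptions):

        import numpy as np

        def software_caliper(contour, n_angles=180):
            """Approximate the caliper diameter of a closed contour.

            contour: (n, 2) array of edge points ordered along the boundary.
            For each direction through the centroid, measure the extent of
            the point set perpendicular to that direction; keep the maximum.
            """
            pts = contour - contour.mean(axis=0)       # centre on point O
            best = 0.0
            for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
                normal = np.array([-np.sin(theta), np.cos(theta)])
                span = pts @ normal                    # signed distances
                best = max(best, span.max() - span.min())
            return best

        # Toy usage: a circle of radius 40 px should give ~80 px diameter.
        t = np.linspace(0.0, 2.0 * np.pi, 360)
        circle = np.c_[40.0 * np.cos(t), 40.0 * np.sin(t)]
        print(software_caliper(circle))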

  18. Maximum power point tracking: a cost saving necessity in solar energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Enslin, J H.R. [Stellenbosch Univ. (South Africa). Dept. of Electrical and Electronic Engineering

    1992-12-01

    A well-engineered remote renewable energy system, utilizing the principle of Maximum Power Point Tracking (MPPT), can improve cost-effectiveness and reliability, and can improve the quality of life in remote areas. A high-efficiency power-electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximizing the output current of a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input source saving of between 15 and 25% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost is achievable for relatively small Remote Area Power Supply (RAPS) systems. The advantages are much greater for systems with large temperature variations and higher power ratings. Other advantages include optimal sizing and system monitoring and control. (author).
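
    A minimal sketch of the hill-climbing (perturb-and-observe) update behind such trackers (generic controller logic; measure() and set_voltage() are hypothetical I/O hooks, and the step size is illustrative, not the paper's controller):

        def mppt_step(v_ref, p_now, p_prev, direction, dv=0.1):
            """One perturb-and-observe update of the operating voltage."""
            if p_now < p_prev:        # power fell: last perturbation was wrong
                direction = -direction
            return v_ref + direction * dv, direction

        # Controller loop (measure() and set_voltage() are hypothetical hooks):
        # v_ref, direction, p_prev = 17.0, +1, 0.0
        # while True:
        #     v, i = measure()                      # panel voltage and current
        #     v_ref, direction = mppt_step(v_ref, v * i, p_prev, direction)
        #     p_prev = v * i
        #     set_voltage(v_ref)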

  19. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The valuation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances, or substances liable to cause allergies or to mutate the genome. (VT)

  20. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes how to determine these quantities; for example, for an engine with cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would ...

  1. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach

  2. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    Energy Technology Data Exchange (ETDEWEB)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  3. MOCUS, Minimal Cut Sets and Minimal Path Sets from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.; Henry, E.B.; Marshall, N.H.

    1976-01-01

    1 - Description of problem or function: From a description of the Boolean failure logic of a system, called a fault tree, and control parameters specifying the minimal cut set length to be obtained, MOCUS determines the system failure modes, or minimal cut sets, and the system success modes, or minimal path sets. 2 - Method of solution: MOCUS uses direct resolution of the fault tree into the cut and path sets. The algorithm starts with the main failure of interest, the top event, and proceeds down to the basic independent component failures, called primary events, to resolve the fault tree and obtain the minimal sets. A key point of the algorithm is that an AND gate alone always increases the number of path sets, while an OR gate alone always increases the number of cut sets and the size of the path sets. Other types of logic gates must be described in terms of AND and OR logic gates. 3 - Restrictions on the complexity of the problem: Output from MOCUS can include minimal cut and path sets for up to 20 gates
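
    A minimal sketch of the top-down gate expansion MOCUS performs, with subset absorption to minimize the result (the toy fault-tree encoding is an assumption, not MOCUS's input format):

        from itertools import product

        # Toy fault tree: gate -> (type, children); leaves are primary events.
        tree = {
            "TOP": ("OR",  ["G1", "E3"]),
            "G1":  ("AND", ["E1", "E2"]),
        }

        def cut_sets(node):
            """Expand a gate into its list of cut sets (each a frozenset)."""
            if node not in tree:                     # primary event
                return [frozenset([node])]
            kind, children = tree[node]
            child_sets = [cut_sets(c) for c in children]
            if kind == "OR":                         # OR: union of alternatives
                return [cs for sets in child_sets for cs in sets]
            # AND: merge one cut set chosen from each child, all combinations.
            return [frozenset().union(*combo) for combo in product(*child_sets)]

        def minimize(sets):
            """Drop any cut set that strictly contains another (absorption)."""
            return [s for s in sets if not any(t < s for t in sets)]

        print(minimize(cut_sets("TOP")))   # -> [{'E1', 'E2'}, {'E3'}]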

  4. Size matters: the interplay between sensing and size in aquatic environments

    Science.gov (United States)

    Wadhwa, Navish; Martens, Erik A.; Lindemann, Christian; Jacobsen, Nis S.; Andersen, Ken H.; Visser, Andre

    2015-11-01

    Sensing the presence or absence of other organisms in the surroundings is critical for the survival of any aquatic organism. This is achieved via the use of various sensory modes such as chemosensing, mechanosensing, vision, hearing, and echolocation. We ask how the size of an organism determines what sensory modes are available to it while others are not. We investigate this by examining the physical laws governing signal generation, transmission, and reception, together with the limits set by physiology. Hydrodynamics plays an important role in sensing; in particular chemosensing and mechanosensing are constrained by the physics of fluid motion at various scales. Through our analysis, we find a hierarchy of sensing modes determined by body size. We theoretically predict the body size limits for various sensory modes, which align well with size ranges found in the literature. Our analysis of all ocean life, from unicellular organisms to whales, demonstrates how body size determines available sensing modes, and thereby acts as a major structuring factor of aquatic life. The Centre for Ocean Life is a VKR center of excellence supported by the Villum Foundation.

  5. Size-Dictionary Interpolation for Robot's Adjustment

    Directory of Open Access Journals (Sweden)

    Morteza eDaneshmand

    2015-05-01

    Full Text Available This paper describes the classification and size-dictionary interpolation of three-dimensional laser-scanner data for use in a realistic virtual fitting room, where the chosen mannequin robot is activated automatically, even when several mannequin robots of different genders and sizes are simultaneously connected to the same computer, so that it mimics body shapes and sizes instantly. The classification process consists of two layers, dealing with gender and size, respectively. The interpolation procedure finds the set of positions of the biologically inspired actuators that makes the activated mannequin robot resemble, as closely as possible, the body shape of the person who has been scanned. It linearly maps the distances between subsequent size templates to the corresponding actuator position sets, and calculates control measures that maintain the same distance proportions; the mathematical description is determined by minimizing the Euclidean distance between the size-dictionary template vectors and the vector of desired body sizes. Experimental results of implementing the proposed method on Fits.me's mannequin robots are visually illustrated, and the remaining steps towards completing the whole realistic online fitting package are explained.
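
    A minimal sketch of that nearest-template lookup with linear interpolation (the template measurements and actuator values are invented for illustration):

        import numpy as np

        # Hypothetical size dictionary: body-measurement vector -> actuator set.
        templates = {
            "M": (np.array([96.0, 80.0, 100.0]), np.array([0.30, 0.25, 0.40])),
            "L": (np.array([104.0, 88.0, 108.0]), np.array([0.55, 0.50, 0.65])),
        }

        def actuator_positions(body, dictionary):
            """Interpolate actuator set points between the two nearest templates."""
            keys = sorted(dictionary,
                          key=lambda k: np.linalg.norm(dictionary[k][0] - body))
            (m0, a0), (m1, a1) = dictionary[keys[0]], dictionary[keys[1]]
            d0 = np.linalg.norm(m0 - body)
            d1 = np.linalg.norm(m1 - body)
            t = d0 / (d0 + d1) if d0 + d1 > 0 else 0.0   # distance proportion
            return (1 - t) * a0 + t * a1

        print(actuator_positions(np.array([100.0, 84.0, 104.0]), templates))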

  6. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  7. Maximum relative speeds of living organisms: Why do bacteria perform as fast as ostriches?

    Science.gov (United States)

    Meyer-Vernet, Nicole; Rospars, Jean-Pierre

    2016-12-01

    Self-locomotion is central to animal behaviour and survival. It is generally analysed by focusing on preferred speeds and gaits under particular biological and physical constraints. In the present paper we focus instead on the maximum speed and study its order-of-magnitude scaling with body size, from bacteria to the largest terrestrial and aquatic organisms. Using data for about 460 species from various taxonomic groups, we find a maximum relative speed of the order of ten body lengths per second over a 10^20-fold mass range of running and swimming animals. This result implies a locomotor time scale of the order of one tenth of a second, virtually independent of body size, anatomy and locomotion style, whose ubiquity requires an explanation building on basic properties of motile organisms. From first-principle estimates, we relate this generic time scale to other basic biological properties, using in particular the recent generalisation of the muscle specific tension to molecular motors. Finally, we go a step further by relating this time scale to still more basic quantities, such as environmental conditions on Earth, in addition to fundamental physical and chemical constants.

  8. Green Lot-Sizing

    NARCIS (Netherlands)

    M. Retel Helmrich (Mathijn Jan)

    2013-01-01

    The lot-sizing problem concerns a manufacturer that needs to solve a production planning problem. The producer must decide at which points in time to set up a production process, and when he/she does, how much to produce. There is a trade-off between inventory costs and the costs associated with setting up production.
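
    A minimal sketch of this trade-off in its classic uncapacitated form, solved as a dynamic program (Wagner-Whitin style; the demand and cost figures are invented):

        def wagner_whitin(demand, setup_cost, holding_cost):
            """Minimal total cost for uncapacitated single-item lot-sizing.

            Producing in period i to cover demand of periods i..j-1 costs one
            setup plus holding cost for every unit carried forward.
            """
            n = len(demand)
            best = [0.0] + [float("inf")] * n   # best[j]: cost of periods 0..j-1
            for j in range(1, n + 1):
                for i in range(j):              # last setup placed in period i
                    carry = sum((t - i) * holding_cost * demand[t]
                                for t in range(i, j))
                    best[j] = min(best[j], best[i] + setup_cost + carry)
            return best[n]

        # Toy usage: 4 periods of demand, setup cost 100, holding cost 1/unit.
        print(wagner_whitin([20, 50, 10, 50], setup_cost=100.0, holding_cost=1.0))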

  9. Soil and Water Assessment Tool model predictions of annual maximum pesticide concentrations in high vulnerability watersheds.

    Science.gov (United States)

    Winchell, Michael F; Peranginangin, Natalia; Srinivasan, Raghavan; Chen, Wenlin

    2018-05-01

    Recent national regulatory assessments of potential pesticide exposure of threatened and endangered species in aquatic habitats have led to increased need for watershed-scale predictions of pesticide concentrations in flowing water bodies. This study was conducted to assess the ability of the uncalibrated Soil and Water Assessment Tool (SWAT) to predict annual maximum pesticide concentrations in the flowing water bodies of highly vulnerable small- to medium-sized watersheds. The SWAT was applied to 27 watersheds, largely within the midwest corn belt of the United States, ranging from 20 to 386 km 2 , and evaluated using consistent input data sets and an uncalibrated parameterization approach. The watersheds were selected from the Atrazine Ecological Exposure Monitoring Program and the Heidelberg Tributary Loading Program, both of which contain high temporal resolution atrazine sampling data from watersheds with exceptionally high vulnerability to atrazine exposure. The model performance was assessed based upon predictions of annual maximum atrazine concentrations in 1-d and 60-d durations, predictions critical in pesticide-threatened and endangered species risk assessments when evaluating potential acute and chronic exposure to aquatic organisms. The simulation results showed that for nearly half of the watersheds simulated, the uncalibrated SWAT model was able to predict annual maximum pesticide concentrations within a narrow range of uncertainty resulting from atrazine application timing patterns. An uncalibrated model's predictive performance is essential for the assessment of pesticide exposure in flowing water bodies, the majority of which have insufficient monitoring data for direct calibration, even in data-rich countries. In situations in which SWAT over- or underpredicted the annual maximum concentrations, the magnitude of the over- or underprediction was commonly less than a factor of 2, indicating that the model and uncalibrated parameterization

  10. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  11. Dinosaurs, dragons, and dwarfs: The evolution of maximal body size

    Science.gov (United States)

    Burness, Gary P.; Diamond, Jared; Flannery, Timothy

    2001-01-01

    Among local faunas, the maximum body size and taxonomic affiliation of the top terrestrial vertebrate vary greatly. Does this variation reflect how food requirements differ between trophic levels (herbivores vs. carnivores) and with taxonomic affiliation (mammals and birds vs. reptiles)? We gathered data on the body size and food requirements of the top terrestrial herbivores and carnivores, over the past 65,000 years, from oceanic islands and continents. The body mass of the top species was found to increase with increasing land area, with a slope similar to that of the relation between body mass and home range area, suggesting that maximum body size is determined by the number of home ranges that can fit into a given land area. For a given land area, the body size of the top species decreased in the sequence: ectothermic herbivore > endothermic herbivore > ectothermic carnivore > endothermic carnivore. When we converted body mass to food requirements, the food consumption of a top herbivore was about 8 times that of a top carnivore, in accord with the factor expected from the trophic pyramid. Although top ectotherms were heavier than top endotherms at a given trophic level, lower metabolic rates per gram of body mass in ectotherms resulted in endotherms and ectotherms having the same food consumption. These patterns explain the size of the largest-ever extinct mammal, but the size of the largest dinosaurs exceeds that predicted from land areas and remains unexplained. PMID:11724953

  12. Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models.

    Directory of Open Access Journals (Sweden)

    Richard R Stein

    2015-07-01

    Full Text Available Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene-gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design.
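
    For the continuous-variable case reviewed here, the pairwise maximum-entropy model is a multivariate Gaussian whose coupling matrix is the inverse covariance; a minimal sketch on synthetic data standing in for expression measurements:

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic "expression" data: 500 samples of 4 coupled variables.
        true_precision = np.array([[ 2.0, -0.8,  0.0, 0.0],
                                   [-0.8,  2.0, -0.5, 0.0],
                                   [ 0.0, -0.5,  2.0, 0.0],
                                   [ 0.0,  0.0,  0.0, 1.0]])
        X = rng.multivariate_normal(np.zeros(4),
                                    np.linalg.inv(true_precision), size=500)

        # Pairwise max-ent fit: couplings J = inverse of the sample covariance.
        J = np.linalg.inv(np.cov(X, rowvar=False))

        # Direct interactions appear as large off-diagonal |J_ij|.
        print(np.round(J, 2))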

  13. ON THE MAXIMUM MASS OF STELLAR BLACK HOLES

    International Nuclear Information System (INIS)

    Belczynski, Krzysztof; Fryer, Chris L.; Bulik, Tomasz; Ruiter, Ashley; Valsecchi, Francesca; Vink, Jorick S.; Hurley, Jarrod R.

    2010-01-01

    We present the spectrum of compact object masses: neutron stars and black holes (BHs) that originate from single stars in different environments. In particular, we calculate the dependence of maximum BH mass on metallicity and on some specific wind mass loss rates (e.g., Hurley et al. and Vink et al.). Our calculations show that the highest mass BHs observed in the Galaxy, M_bh ~ 15 M_sun, in the high metallicity environment (Z = Z_sun = 0.02) can be explained with stellar models and the wind mass loss rates adopted here. To reach this result we had to set luminous blue variable mass loss rates at the level of ~10^-4 M_sun yr^-1 and to employ metallicity-dependent Wolf-Rayet winds. With such winds, calibrated on Galactic BH mass measurements, the maximum BH mass obtained for moderate metallicity (Z = 0.3 Z_sun = 0.006) is M_bh,max = 30 M_sun. This is a rather striking finding as the mass of the most massive known stellar BH is M_bh = 23-34 M_sun and, in fact, it is located in a small star-forming galaxy with moderate metallicity. We find that in the very low (globular cluster-like) metallicity environment the maximum BH mass can be as high as M_bh,max = 80 M_sun (Z = 0.01 Z_sun = 0.0002). It is interesting to note that X-ray luminosity from Eddington-limited accretion onto an 80 M_sun BH is of the order of ~10^40 erg s^-1 and is comparable to luminosities of some known ultra-luminous X-ray sources. We emphasize that our results were obtained for single stars only and that binary interactions may alter these maximum BH masses (e.g., accretion from a close companion). This is strictly a proof-of-principle study which demonstrates that stellar models can naturally explain even the most massive known stellar BHs.

  14. New Maximal Two-distance Sets

    DEFF Research Database (Denmark)

    Lisonek, Petr

    1996-01-01

    A two-distance set in E^d is a point set X in the d-dimensional Euclidean space such that the distances between distinct points in X assume only two different non-zero values. Based on results from classical distance geometry, we develop an algorithm to classify, for a given dimension, all maximal (largest possible) two-distance sets in E^d. Using this algorithm we have completed the full classification for all dimensions less than or equal to 7, and we have found one set in E^8 whose maximality follows from Blokhuis' upper bound on sizes of s-distance sets. While in the dimensions less than or equal to 6...

  15. Optimized Basis Sets for the Environment in the Domain-Specific Basis Set Approach of the Incremental Scheme.

    Science.gov (United States)

    Anacker, Tony; Hill, J Grant; Friedrich, Joachim

    2016-04-21

    Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.

  16. Social polyandry, parental investment, sexual selection, and evolution of reduced female gamete size.

    Science.gov (United States)

    Andersson, Malte

    2004-01-01

    Sexual selection in the form of sperm competition is a major explanation for small size of male gametes. Can sexual selection in polyandrous species with reversed sex roles also lead to reduced female gamete size? Comparative studies show that egg size in birds tends to decrease as a lineage evolves social polyandry. Here, a quantitative genetic model predicts that female scrambles over mates lead to evolution of reduced female gamete size. Increased female mating success drives the evolution of smaller eggs, which take less time to produce, until balanced by lowered offspring survival. Mean egg size is usually reduced and polyandry increased by increasing sex ratio (male bias) and maximum possible number of mates. Polyandry also increases with the asynchrony (variance) in female breeding start. Opportunity for sexual selection increases with the maximum number of mates but decreases with increasing sex ratio. It is well known that parental investment can affect sexual selection. The model suggests that the influence is mutual: owing to a coevolutionary feedback loop, sexual selection in females also shapes initial parental investment by reducing egg size. Feedback between sexual selection and parental investment may be common.

  17. Squamate hatchling size and the evolutionary causes of negative offspring size allometry.

    Science.gov (United States)

    Meiri, S; Feldman, A; Kratochvíl, L

    2015-02-01

    Although fecundity selection is ubiquitous, in an overwhelming majority of animal lineages small species produce smaller numbers of offspring per clutch. In this context, egg, hatchling and neonate sizes are absolutely larger, but smaller relative to adult body size, in larger species. The evolutionary causes of this widespread phenomenon are not fully explored. Negative offspring size allometry can result from processes limiting maximal egg/offspring size, forcing larger species to produce relatively smaller offspring ('upper limit'), or from a limit on minimal egg/offspring size, forcing smaller species to produce relatively larger offspring ('lower limit'). Several reptile lineages have invariant clutch sizes, where females always lay either one or two eggs per clutch. These lineages offer an interesting perspective on the general evolutionary forces driving negative offspring size allometry, because an important selective factor, fecundity selection in a single clutch, is eliminated here. Under the upper limit hypotheses, large offspring should be selected against in lineages with invariant clutch sizes as well, and these lineages should therefore exhibit the same, or shallower, offspring size allometry as lineages with variable clutch size. On the other hand, the lower limit hypotheses would allow lineages with invariant clutch sizes to have steeper offspring size allometries. Using an extensive data set on the hatchling and female sizes of > 1800 species of squamates, we document that negative offspring size allometry is widespread in lizards and snakes with variable clutch sizes and that some lineages with invariant clutch sizes have unusually steep offspring size allometries. These findings suggest that negative offspring size allometry is driven by a constraint on minimal offspring size, which scales with a negative allometry. © 2014 European Society For Evolutionary Biology.

  18. Simulation of finite size effects of the fiber bundle model

    Science.gov (United States)

    Hao, Da-Peng; Tang, Gang; Xun, Zhi-Peng; Xia, Hui; Han, Kui

    2018-01-01

    In theory, the macroscopic fracture of materials should correspond with the thermodynamic limit of the fiber bundle model. However, simulating a fiber bundle model of infinite size is unrealistic. To study the finite size effects of the fiber bundle model, fiber bundle models of various sizes are simulated in detail. The effects of system size on the constitutive behavior, critical stress, maximum avalanche size, avalanche size distribution, and number of load-increase steps are explored. The simulation results imply that there is no characteristic or cutoff size for the macroscopic mechanical and statistical properties of the model. The constitutive curves near macroscopic failure for various system sizes collapse well under a simple scaling relationship. Simultaneously, the introduction of a simple extrapolation method facilitates obtaining more accurate simulation results in the large-size limit, which is better for comparison with theoretical results.
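
    The equal-load-sharing bundle is simple enough to sketch directly. Below is a minimal Python simulation, assuming uniformly distributed failure thresholds (the abstract does not state the distribution used), that illustrates how the critical stress of finite bundles drifts toward its thermodynamic-limit value of 1/4:

    ```python
    import numpy as np

    def critical_stress_els(n_fibers, rng):
        """Peak (critical) stress of an equal-load-sharing fiber bundle
        with uniform [0, 1) failure thresholds."""
        thresholds = np.sort(rng.random(n_fibers))
        # After the k weakest fibers fail, the surviving n - k fibers carry
        # the load; the bundle-averaged stress is thresholds[k] * (n - k) / n.
        k = np.arange(n_fibers)
        sigma = thresholds * (n_fibers - k) / n_fibers
        return sigma.max()

    rng = np.random.default_rng(0)
    for n in (100, 1_000, 10_000, 100_000):
        est = np.mean([critical_stress_els(n, rng) for _ in range(50)])
        print(f"N = {n:>6}: mean critical stress ~ {est:.4f}")
    # Uniform thresholds give sigma_c = 1/4 in the thermodynamic limit;
    # finite bundles overshoot, and the bias shrinks as N grows.
    ```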

  19. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  20. Synthesis, optical characterization, and size distribution determination by curve resolution methods of water-soluble CdSe quantum dots

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Calink Indiara do Livramento; Carvalho, Melissa Souza; Raphael, Ellen; Ferrari, Jefferson Luis; Schiavon, Marco Antonio, E-mail: schiavon@ufsj.edu.br [Universidade Federal de Sao Joao del-Rei (UFSJ), MG (Brazil). Grupo de Pesquisa em Quimica de Materiais; Dantas, Clecio [Universidade Estadual do Maranhao (LQCINMETRIA/UEMA), Caxias, MA (Brazil). Lab. de Quimica Computacional Inorganica e Quimiometria

    2016-11-15

    In this work a colloidal approach was applied to synthesize water-soluble CdSe quantum dots (QDs) bearing a surface ligand such as thioglycolic acid (TGA), 3-mercaptopropionic acid (MPA), glutathione (GSH), or thioglycerol (TGH). The synthesized material was characterized by X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FT-IR), UV-visible spectroscopy (UV-Vis), and fluorescence spectroscopy (PL). Additionally, a comparative study of the optical properties of the different CdSe QDs was performed, demonstrating how the surface ligand affected crystal growth. The particle sizes were calculated from a polynomial function that correlates particle size with the position of the fluorescence maximum. Curve resolution methods (EFA and MCR-ALS) were employed to decompose a series of fluorescence spectra in order to investigate the CdSe QD size distribution and determine the number of fractions with different particle sizes. The results for the MPA-capped CdSe sample showed only two main fractions with different particle sizes, with maximum emission at 642 and 686 nm. The diameters calculated from these emission maxima were, respectively, 2.74 and 3.05 nm. (author)
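
    The abstract does not reproduce the sizing polynomial itself. As an illustration of this kind of empirical sizing curve, the widely used CdSe calibration of Yu et al. (Chem. Mater. 15, 2003), which maps the first excitonic absorption peak (not the emission maximum used above) to particle diameter, looks like this:

    ```python
    def cdse_diameter_nm(lambda_abs_nm):
        """Empirical CdSe sizing curve (Yu et al., Chem. Mater. 15, 2003):
        diameter in nm from the first excitonic absorption peak in nm.
        Illustrative only: the study above correlates size with the
        fluorescence maximum, whose coefficients the abstract omits."""
        x = lambda_abs_nm
        return (1.6122e-9 * x**4 - 2.6575e-6 * x**3
                + 1.6242e-3 * x**2 - 0.4277 * x + 41.57)

    print(f"{cdse_diameter_nm(550):.2f} nm")  # ~3.0 nm for a 550 nm peak
    ```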

  1. Benchmarking Methods and Data Sets for Ligand Enrichment Assessment in Virtual Screening

    Science.gov (United States)

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2014-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets from the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. “analogue bias”, “artificial enrichment” and “false negative”. In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations for three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The Leave-One-Out Cross-Validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased in terms of property matching, ROC curves and AUCs. PMID:25481478

  2. Benchmarking methods and data sets for ligand enrichment assessment in virtual screening.

    Science.gov (United States)

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. "analogue bias", "artificial enrichment" and "false negative". In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylases (HDACs) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The leave-one-out cross-validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased as measured by property matching, ROC curves and AUCs. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. The causal effect of board size in the performance of small and medium-sized firms

    DEFF Research Database (Denmark)

    Bennedsen, Morten; Kongsted, Hans Christian; Meisner Nielsen, Kasper

    2008-01-01

    Empirical studies of large publicly traded firms have shown a robust negative relationship between board size and firm performance. The evidence on small and medium-sized firms is less clear; we show that existing work has been incomplete in analyzing the causal relationship due to weak identification strategies. Using a rich data set of almost 7000 closely held corporations we provide a causal analysis of board size effects on firm performance: we use a novel instrument given by the number of children of the chief executive officer (CEO) of the firms. First, we find a strong positive correlation between family size and board size and show this correlation to be driven by firms where the CEO's relatives serve on the board. Second, we find empirical evidence of a small adverse board size effect driven by the minority of small and medium-sized firms that are characterized by having...

  4. Factors affecting seed set in brussels sprouts, radish and cyclamen

    NARCIS (Netherlands)

    Murabaa, El A.I.M.

    1957-01-01

    If brussels sprouts were self-fertilized, seed setting increased with the age of the flower buds until a maximum some days before the buds opened. After that, set decreased rapidly. Warmth shortened the period over which selfing was possible and shortened the period to the opening of the flowers. Most ...

  5. International urodynamic basic spinal cord injury data set

    DEFF Research Database (Denmark)

    Craggs, M.; Kennelly, M.; Schick, E.

    2008-01-01

    ... of the data set was developed after review and comments by members of the Executive Committee of the International SCI Standards and Data Sets, the ISCoS Scientific Committee, the ASIA Board, relevant and interested (international) organizations and societies (around 40) and persons, and the ISCoS Council... Variables included in the International Urodynamic Basic SCI Data Set are: date of data collection, bladder sensation during filling cystometry, detrusor function, compliance during filling cystometry, function during voiding, detrusor leak point pressure, maximum detrusor pressure, cystometric bladder capacity...

  6. Variations in the size of focal nodular hyperplasia on magnetic resonance imaging.

    Science.gov (United States)

    Ramírez-Fuentes, C; Martí-Bonmatí, L; Torregrosa, A; Del Val, A; Martínez, C

    2013-01-01

    To evaluate the changes in the size of focal nodular hyperplasia (FNH) during long-term magnetic resonance imaging (MRI) follow-up. We reviewed 44 FNHs in 30 patients studied with MRI, with at least two MRI studies at least 12 months apart. We measured the largest diameter of the lesion (in mm) in contrast-enhanced axial images and calculated the percentage of variation as the difference between the maximum diameter in the follow-up study and the maximum diameter in the initial study. We defined significant variation in size as variation greater than 20%. We also analyzed predisposing hormonal factors. The mean interval between the two imaging studies was 35±2 months (range: 12-94). Most lesions (80%) remained stable during follow-up. Only 9 of the 44 lesions (20%) showed a significant variation in diameter: 7 (16%) decreased in size and 2 (4%) increased, with variations that reached double the initial size. The change in size was not related to pregnancy, menopause, or the use of birth control pills or corticoids. Changes in the size of FNHs during follow-up are relatively common and should not lead to a change in the diagnosis. These variations in size seem to be independent of the hormonal factors that are considered to predispose to FNH. Copyright © 2011 SERAM. Published by Elsevier España. All rights reserved.
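
    The variation measure used above reduces to a one-line computation; a minimal sketch, assuming the initial diameter as the denominator (the abstract does not state this explicitly):

    ```python
    def percent_variation(d_initial_mm, d_followup_mm):
        """Percentage change in maximum FNH diameter between MRI studies;
        |change| > 20% counts as a significant variation in the study above."""
        return 100.0 * (d_followup_mm - d_initial_mm) / d_initial_mm

    print(percent_variation(30.0, 24.0))  # -20.0: a borderline decrease
    ```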

  7. High variation in manufacturer-declared serving size of packaged discretionary foods in Australia.

    Science.gov (United States)

    Haskelberg, Hila; Neal, Bruce; Dunford, Elizabeth; Flood, Victoria; Rangan, Anna; Thomas, Beth; Cleanthous, Xenia; Trevena, Helen; Zheng, Jazzmin Miaobing; Louie, Jimmy Chun Yu; Gill, Timothy; Wu, Jason H Y

    2016-05-28

    Despite the potential of declared serving size to encourage appropriate portion size consumption, most countries including Australia have not developed clear reference guidelines for serving size. The present study evaluated variability in manufacturer-declared serving size of discretionary food and beverage products in Australia, and how declared serving size compared with the 2013 Australian Dietary Guideline (ADG) standard serve (600 kJ). Serving sizes were obtained from the Nutrition Information Panel for 4466 packaged, discretionary products in 2013 at four large supermarkets in Sydney, Australia, and categorised into fifteen categories in line with the 2013 ADG. For unique products that were sold in multiple package sizes, the percentage difference between the minimum and the maximum serving size across different package sizes was calculated. A high variation in serving size was found within the majority of food and beverage categories - for example, among 347 non-alcoholic beverages (e.g. soft drinks), the median serving size was 250 (interquartile range (IQR) 250, 355) ml (range 100-750 ml). Declared serving size for unique products available in multiple package sizes also showed high variation, particularly for chocolate-based confectionery, with a median percentage difference between minimum and maximum serving size of 183% (IQR 150%). Categories with a high proportion of products that exceeded the 600 kJ ADG standard serve included cakes and muffins, pastries and desserts (≥74% for each). High variability in declared serving size may confound interpretation and understanding among consumers interested in standardising and controlling their portion selection. Future research is needed to assess if and how standardising declared serving size might affect consumer behaviour.

  8. Analysis of Benthic Foraminiferal Size Change During the Eocene-Oligocene Transition

    Science.gov (United States)

    Zachary, W.; Keating-Bitonti, C.

    2017-12-01

    The Eocene-Oligocene transition is a significant global cooling event with the first growth of continental ice on Antarctica. In the geologic record, the size of fossils can be used to indirectly observe how organisms respond to climate change. For example, organisms tend to be larger in cooler environments as a physiological response to temperature. This major global cooling event should therefore influence organism physiology, resulting in significant size trends in the fossil record. Benthic foraminifera are protists, and those that grow a carbonate shell are both well preserved and abundant in marine sediments. Here, we used the foraminiferal fossil record to study the relationship between their size and global cooling. We hypothesize that cooler temperatures across the Eocene-Oligocene boundary promoted shell size increase. To test this hypothesis, we studied benthic foraminifera from 10 deep-sea cores drilled at Ocean Drilling Program Site 744, located in the southern Indian Ocean. We washed sediment samples over a 63-micron sieve and picked foraminifera from a 125-micron sieve. We studied the benthic foraminiferal genus Cibicidoides and its size change across this cooling event. Picked specimens were imaged and we measured the diameter of their shells using ImageJ. Overall, we find that Cibicidoides shows a general trend of increasing size during this transition. In particular, both the median and maximum sizes of Cibicidoides increase from the Eocene into the Oligocene. We also analyzed C. pachyderma and C. mundulus for size trends. Although both species increase in median size across the boundary, only C. pachyderma shows a consistent trend of increasing maximum, median, and minimum shell diameter. After the Eocene-Oligocene boundary, we observe that shell diameter decreases following peak cooling and that foraminiferal sizes remain stable into the early Oligocene. Therefore, the Eocene-Oligocene cooling event appears to have had a strong influence on shell size.

  9. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    OpenAIRE

    Liu, Peng; Wang, Xiaoli

    2017-01-01

    A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due ...

  10. Does combined strength training and local vibration improve isometric maximum force? A pilot study.

    Science.gov (United States)

    Goebel, Ruben; Haddad, Monoem; Kleinöder, Heinz; Yue, Zengyuan; Heinen, Thomas; Mester, Joachim

    2017-01-01

    The aim of the study was to determine whether a combination of strength training (ST) and local vibration (LV) improved the isometric maximum force of the arm flexor muscles. ST was applied to the left arm of the subjects; LV was applied to the right arm of the same subjects. The main aim was to examine the effect of LV during a dumbbell biceps curl (Scott curl) on the isometric maximum force of the opposite muscle in the same subjects. It was hypothesized that the intervention with LV produces a greater gain in isometric force of the arm flexors than ST. Twenty-seven collegiate students participated in the study. The training load was 70% of the individual 1RM. Four sets of 12 repetitions were performed three times per week for four weeks. The right arm of all subjects represented the vibration-trained body side (VS) and the left arm served as the traditionally trained body side (TTS). A significant increase in isometric maximum force occurred in both arms. The VS, however, increased isometric maximum force by about 43%, in contrast to 22% for the TTS. The combined intervention of ST and LV improves the isometric maximum force of the arm flexor muscles. Level of evidence: III.

  11. Analysis of factors that influence the maximum number of repetitions in two upper-body resistance exercises: curl biceps and bench press.

    Science.gov (United States)

    Iglesias, Eliseo; Boullosa, Daniel A; Dopico, Xurxo; Carballeira, Eduardo

    2010-06-01

    The purpose of this study was to analyze the influence of exercise type, set configuration, and relative intensity load on the relationship between one repetition maximum (1RM) and maximum number of repetitions (MNR). Thirteen male subjects, experienced in resistance training, were tested in bench press and biceps curl for 1RM, MNR at 90% of 1RM with a cluster set configuration (rest of 30 s between repetitions) and MNR at 70% of 1RM with a traditional set configuration (no rest between repetitions). A linear encoder was used for measuring displacement of the load. Analysis of variance revealed a significant effect of load in both bench press and biceps curl (p < 0.05 in each case), whereas the effect of exercise type was not significant (p > 0.05). The correlation between 1RM and MNR was significant for the medium-intensity load in biceps curl (r = -0.574; p < 0.05). Velocity declined similarly along the set, so velocity seems to be similar at a same relative intensity for subjects with differences in maximum strength levels. From our results, we suggest the employment of MNR rather than % of 1RM for training monitoring. Furthermore, we suggest the introduction of the cluster set configuration for upper-body assessment of MNR and for upper-body muscular endurance training at high-intensity loads, as it seems an efficient approach for sessions with greater training volumes. This could be an interesting approach for such sports as wrestling or weightlifting.

  12. A ROBUST DETERMINATION OF THE SIZE OF QUASAR ACCRETION DISKS USING GRAVITATIONAL MICROLENSING

    International Nuclear Information System (INIS)

    Jiménez-Vicente, J.; Mediavilla, E.; Muñoz, J. A.; Kochanek, C. S.

    2012-01-01

    Using microlensing measurements for a sample of 27 image pairs of 19 lensed quasars we determine a maximum likelihood estimate for the accretion disk size of an average quasar of r_s = 4.0 (+2.4/-3.1) lt-day at rest-frame wavelength ⟨λ⟩ = 1736 Å for microlenses with a mean mass ⟨M⟩ = 0.3 M☉. This value, in good agreement with previous results from smaller samples, is roughly a factor of five greater than the predictions of the standard thin disk model. The individual size estimates for the 19 quasars in our sample are also in excellent agreement with the results of the joint maximum likelihood analysis.

  13. Constraints on pulsar masses from the maximum observed glitch

    Science.gov (United States)

    Pizzochero, P. M.; Antonelli, M.; Haskell, B.; Seveso, S.

    2017-07-01

    Neutron stars are unique cosmic laboratories in which fundamental physics can be probed in extreme conditions not accessible to terrestrial experiments. In particular, the precise timing of rotating magnetized neutron stars (pulsars) reveals sudden jumps in rotational frequency in these otherwise steadily spinning-down objects. These 'glitches' are thought to be due to the presence of a superfluid component in the star, and offer a unique glimpse into the interior physics of neutron stars. In this paper we propose an innovative method to constrain the mass of glitching pulsars, using observations of the maximum glitch observed in a star, together with state-of-the-art microphysical models of the pinning interaction between superfluid vortices and ions in the crust. We study the properties of a physically consistent angular momentum reservoir of pinned vorticity, and we find a general inverse relation between the size of the maximum glitch and the pulsar mass. We are then able to estimate the mass of all the observed glitchers that have displayed at least two large events. Our procedure will allow current and future observations of glitching pulsars to constrain not only the physics of glitch models but also the superfluid properties of dense hadronic matter in neutron star interiors.

  14. One repetition maximum bench press performance: a new approach for its evaluation in inexperienced males and females: a pilot study.

    Science.gov (United States)

    Bianco, Antonino; Filingeri, Davide; Paoli, Antonio; Palma, Antonio

    2015-04-01

    The aim of this study was to evaluate a new method to perform the one repetition maximum (1RM) bench press test, by combining previously validated predictive and practical procedures. Eight young male and 7 female participants, with no previous experience of resistance training, performed a first set of repetitions to fatigue (RTF) with a workload corresponding to ⅓ of their body mass (BM) for a maximum of 25 repetitions. Following a 5-min recovery period, a second set of RTF was performed with a workload corresponding to ½ of participants' BM. The number of repetitions performed in this set was then used to predict the workload to be used for the 1RM bench press test using Mayhew's equation. Oxygen consumption, heart rate and blood lactate were monitored before, during and after each 1RM attempt. A significant effect of gender was found on the maximum number of repetitions achieved during the RTF set performed with ½ of participants' BM (males: 25.0 ± 6.3; females: 11.0 ± 10.6; t = 6.2; p < 0.05), and hence on the workload predicted for the 1RM bench press test. We conclude that, by combining previously validated predictive equations with practical procedures (i.e. using a fraction of participants' BM to determine the workload for an RTF set), the new method we tested appeared safe, accurate (particularly in females) and time-effective in the practical evaluation of 1RM performance in inexperienced individuals. Copyright © 2014 Elsevier Ltd. All rights reserved.
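
    Mayhew's equation referenced above predicts 1RM from a submaximal repetitions-to-fatigue set. A minimal sketch of that final step; the participant numbers below are hypothetical:

    ```python
    import math

    def one_rm_mayhew(load_kg, reps):
        """Predicted 1RM (Mayhew et al., 1992):
        %1RM = 52.2 + 41.9 * exp(-0.055 * reps-to-fatigue)."""
        pct = 52.2 + 41.9 * math.exp(-0.055 * reps)
        return 100.0 * load_kg / pct

    # Hypothetical participant: body mass 80 kg, so the second RTF set uses
    # 40 kg (half of body mass, as in the protocol above); 15 reps to fatigue.
    print(f"predicted 1RM ~ {one_rm_mayhew(80.0 / 2, 15):.1f} kg")
    ```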

  15. Environmental conditions of interstadial (MIS 3 and features of the last glacial maximum on the King George island (West Antarctica

    Directory of Open Access Journals (Sweden)

    S. R. Verkulich

    2013-01-01

    The interstadial marine deposit stratum on the Fildes Peninsula (King George Island) was described on the basis of field and laboratory investigations during 2008-2011. Fragments of the stratum occur in the western and north-western parts of the peninsula in the following forms: sections of soft sediments containing fossil shells, marine algae, bones of marine animals and rich marine diatom complexes in situ (11 sites); and fragments of shells and bones on the surface (25 sites). According to the results of radiocarbon dating, these deposits were accumulated within the period 19-50 ky BP. The geographical and altitudinal settings of the sites, their age characteristics, the taxonomy of the fossil flora and fauna, and the good preservation of the soft deposit stratum allow the following conclusions: during the interstadial, sea water covered a significant part of King George Island up to the present altitude of 40 m a.s.l., and the King George Island glaciation was smaller at that time; environmental conditions during accumulation of the interstadial deposit stratum were at least no colder than today; the territory of King George Island was probably not covered entirely by the ice masses of the Last Glacial Maximum until 19 ky BP; and during the Last Glacial Maximum, King George Island was covered by thin, «cold», immobile glaciers, which contributed to the preservation of the soft marine interstadial deposits filled with fossil flora and fauna.

  16. Family size and effective population size in a hatchery stock of coho salmon (Oncorhynchus kisutch)

    Science.gov (United States)

    Simon, R.C.; McIntyre, J.D.; Hemmingsen, A.R.

    1986-01-01

    Means and variances of family size measured in five year-classes of wire-tagged coho salmon (Oncorhynchus kisutch) were linearly related. Effective population size was calculated by using estimated means and variances of family size in a 25-yr data set. Although the numbers of age 3 adults returning to the hatchery appeared to be large enough to avoid inbreeding problems (the 25-yr mean exceeded 4500), the numbers actually contributing to hatchery production may be too low. Several strategies are proposed to correct the perceived problem. Argument is given to support the contention that the problem of effective size is fairly general and is not confined to the present study population.
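
    The abstract does not give the exact estimator, but the textbook approximation for a stable population (mean family size near 2), Ne ~ 4N / (Vk + 2) with Vk the variance of family size, conveys why overdispersed family sizes shrink the effective size; all numbers below are illustrative:

    ```python
    import numpy as np

    def effective_size(family_sizes):
        """Textbook approximation Ne ~ 4N / (Vk + 2) for a stable population;
        not necessarily the estimator used in the study above."""
        k = np.asarray(family_sizes, dtype=float)
        return 4.0 * k.size / (k.var(ddof=1) + 2.0)

    # Hypothetical: 4500 spawners with mean family size 2 but variance ~10
    rng = np.random.default_rng(1)
    fams = rng.negative_binomial(n=0.5, p=0.2, size=4500)
    print(f"Vk = {fams.var(ddof=1):.1f}, Ne ~ {effective_size(fams):.0f}")
    # Ne comes out near 1500, far below the census count of 4500.
    ```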

  17. Productivity response of calcareous nannoplankton to Eocene Thermal Maximum 2 (ETM2

    Directory of Open Access Journals (Sweden)

    M. Dedert

    2012-05-01

    The Early Eocene Thermal Maximum 2 (ETM2) at ~53.7 Ma is one of multiple hyperthermal events that followed the Paleocene-Eocene Thermal Maximum (PETM, ~56 Ma). The negative carbon excursion and deep ocean carbonate dissolution which occurred during the event imply that a substantial amount (10³ Gt) of carbon (C) was added to the ocean-atmosphere system, consequently increasing atmospheric CO2 (pCO2). This makes the event relevant to the current scenario of anthropogenic CO2 additions and global change. Resulting changes in ocean stratification and pH, as well as changes in exogenic cycles which supply nutrients to the ocean, may have affected the productivity of marine phytoplankton, especially calcifying phytoplankton. Changes in productivity, in turn, may affect the rate of sequestration of excess CO2 in the deep ocean and sediments. In order to reconstruct the productivity response of calcareous nannoplankton to ETM2 in the South Atlantic (Site 1265) and North Pacific (Site 1209), we employ the coccolith Sr/Ca productivity proxy, with analysis of well-preserved picked monogeneric populations by ion probe, supplemented by analysis of various size fractions of nannofossil sediments by ICP-AES. The former technique of measuring Sr/Ca in selected nannofossil populations using the ion probe circumvents possible contamination with secondary calcite. Avoiding such contamination is important for an accurate interpretation of the nannoplankton productivity record, since diagenetic processes can bias the productivity signal, as we demonstrate for Sr/Ca measurements in the fine (<20 μm) and other size fractions obtained from bulk sediments from Site 1265. At this site, the paleoproductivity signal as reconstructed from the Sr/Ca appears to be governed by cyclic changes, possibly orbital forcing, resulting in a 20-30% variability in Sr/Ca in dominant genera as obtained by ion probe. The ~13 to 21 ...

  18. Overcoming Barriers in Unhealthy Settings

    Directory of Open Access Journals (Sweden)

    Michael K. Lemke

    2016-03-01

    We investigated the phenomenon of sustained health-supportive behaviors among long-haul commercial truck drivers, who belong to an occupational segment with extreme health disparities. With a focus on setting-level factors, this study sought to discover ways in which individuals exhibit resiliency while immersed in endemically obesogenic environments, as well as to understand setting-level barriers to engaging in health-supportive behaviors. Using a transcendental phenomenological research design, 12 long-haul truck drivers who met screening criteria were selected using purposeful maximum sampling. Seven broad themes were identified: access to health resources, barriers to health behaviors, recommended alternative settings, constituents of health behavior, motivation for health behaviors, attitude toward health behaviors, and trucking culture. We suggest applying ecological theories of health behavior and settings approaches to improve driver health. We also propose the Integrative and Dynamic Healthy Commercial Driving (IDHCD) paradigm, grounded in complexity science, as a new theoretical framework for improving driver health outcomes.

  19. Real time estimation of photovoltaic modules characteristics and its application to maximum power point operation

    Energy Technology Data Exchange (ETDEWEB)

    Garrigos, Ausias; Blanes, Jose M.; Carrasco, Jose A. [Area de Tecnologia Electronica, Universidad Miguel Hernandez de Elche, Avda. de la Universidad s/n, 03202 Elche, Alicante (Spain); Ejea, Juan B. [Departamento de Ingenieria Electronica, Universidad de Valencia, Avda. Dr Moliner 50, 46100 Valencia, Valencia (Spain)

    2007-05-15

    In this paper, an approximate curve fitting method for photovoltaic modules is presented. The operation is based on solving a simple solar cell electrical model with a microcontroller in real time. Only four voltage and current coordinates are needed to obtain the solar module parameters and set its operation at maximum power under any conditions of illumination and temperature. Despite its simplicity, this method is suitable for low cost real time applications, such as a control-loop reference generator in photovoltaic maximum power point circuits. The theory that supports the estimator, together with simulations and experimental results, is presented. (author)

  20. Tick size and stock returns

    Science.gov (United States)

    Onnela, Jukka-Pekka; Töyli, Juuso; Kaski, Kimmo

    2009-02-01

    Tick size is an important aspect of the micro-structural level organization of financial markets. It is the smallest institutionally allowed price increment, has a direct bearing on the bid-ask spread, influences the strategy of trading order placement in electronic markets, affects the price formation mechanism, and appears to be related to the long-term memory of volatility clustering. In this paper we investigate the impact of tick size on stock returns. We start with a simple simulation to demonstrate how continuous returns become distorted after confining the price to a discrete grid governed by the tick size. We then move on to a novel experimental set-up that combines decimalization pilot programs and cross-listed stocks in New York and Toronto. This allows us to observe a set of stocks traded simultaneously under two different ticks while holding all security-specific characteristics fixed. We then study the normality of the return distributions and carry out fits to the chosen distribution models. Our empirical findings are somewhat mixed and in some cases appear to challenge the simulation results.
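
    The distortion mechanism is easy to reproduce. A minimal sketch with an illustrative random-walk price path and tick size (not the paper's data or parameters):

    ```python
    import numpy as np

    def tick_distorted_returns(n_steps=10_000, s0=10.0, sigma=0.02,
                               tick=0.05, seed=0):
        """Simulate a continuous log-price path, confine prices to a tick
        grid, and return both return series for comparison."""
        rng = np.random.default_rng(seed)
        log_p = np.log(s0) + np.cumsum(sigma * rng.standard_normal(n_steps))
        price = np.exp(log_p)
        gridded = np.round(price / tick) * tick   # confine to the tick grid
        return np.diff(np.log(price)), np.diff(np.log(gridded))

    r_cont, r_grid = tick_distorted_returns()
    for name, r in (("continuous", r_cont), ("gridded", r_grid)):
        kurt = ((r - r.mean()) ** 4).mean() / r.var() ** 2
        print(f"{name:>10}: std = {r.std():.5f}, kurtosis = {kurt:.2f}")
    # Gridded returns pile up on a discrete set of values, inflating
    # kurtosis relative to the continuous series.
    ```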

  1. Linear Time Local Approximation Algorithm for Maximum Stable Marriage

    Directory of Open Access Journals (Sweden)

    Zoltán Király

    2013-08-01

    We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. This algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm every person makes decisions using only their own preference list and some information asked from members of that list (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.
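
    For background, the deferred-acceptance algorithm of Gale and Shapley mentioned above can be sketched as follows. Note this is the classic version for complete, strict preference lists, not Király's 3/2-approximation for ties and incomplete lists:

    ```python
    def gale_shapley(men_prefs, women_prefs):
        """Classic Gale-Shapley deferred acceptance (complete, strict lists).
        Returns a stable matching as a man -> woman dictionary."""
        free = list(men_prefs)                      # men yet to be matched
        next_choice = {m: 0 for m in men_prefs}     # index into m's list
        rank = {w: {m: i for i, m in enumerate(p)}  # lower rank = preferred
                for w, p in women_prefs.items()}
        engaged = {}                                # woman -> man
        while free:
            m = free.pop()
            w = men_prefs[m][next_choice[m]]
            next_choice[m] += 1
            if w not in engaged:
                engaged[w] = m
            elif rank[w][m] < rank[w][engaged[w]]:  # w prefers m: swap
                free.append(engaged[w])
                engaged[w] = m
            else:
                free.append(m)                      # w rejects m
        return {m: w for w, m in engaged.items()}

    men = {"a": ["x", "y"], "b": ["y", "x"]}
    women = {"x": ["b", "a"], "y": ["a", "b"]}
    print(gale_shapley(men, women))  # {'a': 'x', 'b': 'y'}
    ```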

  2. SU-F-T-628: An Evaluation of Grid Size in Eclipse AcurosXB Dose Calculation Algorithm for SBRT Lung

    Energy Technology Data Exchange (ETDEWEB)

    Pokharel, S [21st Century Oncology, Naples, FL (United States); Rana, S [McLaren Proton Therapy Center, Karmanos Cancer Institute at McLaren-Flint, Flint, MI (United States)

    2016-06-15

    Purpose: The purpose of this study is to evaluate the effect of grid size in the Eclipse AcurosXB dose calculation algorithm for SBRT lung. Methods: Five previously treated SBRT lung cases were chosen for the present study. Four of the plans were 5-field conventional IMRT and one was a RapidArc plan. All five cases were calculated with the five grid sizes (1, 1.5, 2, 2.5 and 3 mm) available for the AXB algorithm, with the same plan normalization. Dosimetric indices relevant to SBRT, along with MUs and calculation times, were recorded for the different grid sizes. The maximum difference was calculated as a percentage of the mean of all five values. All plans underwent IMRT QA with portal dosimetry. Results: The maximum difference in MUs was within 2%. The calculation time increased by as much as a factor of 7 from the largest grid size (3 mm) to the smallest (1 mm). The largest differences in PTV minimum, maximum and mean dose were 7.7%, 1.5% and 1.6%, respectively. The highest D2-Max difference was 6.1%. The highest differences in ipsilateral lung mean dose, V5Gy, V10Gy and V20Gy were 2.6%, 2.4%, 1.9% and 3.8%, respectively. The maximum differences in heart, cord and esophagus dose were 6.5%, 7.8% and 4.02%, respectively. The IMRT gamma passing rate at 2%/2 mm remained within 1.5%, with at least 98% of points passing for all grid sizes. Conclusion: This work indicates that the smallest grid size of 1 mm available in AXB is not necessarily required for accurate dose calculation. No meaningful change in the IMRT passing rate was observed when the grid size was reduced below 2 mm. Although the maximum percentage differences of some dosimetric indices appear large, most are clinically insignificant in absolute dose values. We therefore conclude that a 2 mm grid size is the best compromise between dose calculation accuracy and calculation time.

  3. Vessel size measurements in angiograms: Manual measurements

    International Nuclear Information System (INIS)

    Hoffmann, Kenneth R.; Dmochowski, Jacek; Nazareth, Daryl P.; Miskolczi, Laszlo; Nemes, Balazs; Gopal, Anant; Wang Zhou; Rudin, Stephen; Bednarek, Daniel R.

    2003-01-01

    Vessel size measurement is perhaps the most often performed quantitative analysis in diagnostic and interventional angiography. Although automated vessel sizing techniques are generally considered to have good accuracy and precision, we have observed that clinicians rarely use these techniques in standard clinical practice, choosing instead to indicate the edges of vessels and catheters to determine sizes and calibrate magnifications, i.e., manual measurements. Thus, we undertook an investigation of the accuracy and precision of vessel sizes calculated from manually indicated edges of vessels. Manual measurements were performed by three neuroradiologists and three physicists. Vessel sizes ranged from 0.1-3.0 mm in simulation studies and 0.3-6.4 mm in phantom studies. Simulation resolution functions had full widths at half maximum (FWHM) ranging from 0.0 to 0.5 mm. Phantom studies were performed with 4.5 in., 6 in., 9 in., and 12 in. image intensifier modes, magnification factor = 1, with and without zooming. The accuracy and reproducibility of the measurements ranged from 0.1 to 0.2 mm, depending on vessel size, resolution, pixel size, and zoom. These results indicate that manual measurements may have accuracies comparable to automated techniques for vessels larger than 1 mm, but that automated techniques which take the resolution function into account should be used for vessels smaller than 1 mm.

  4. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  5. Correlation Between Echinoidea Size and Threat Level

    Science.gov (United States)

    Bakshi, S.; Lee, A.; Heim, N.; Payne, J.

    2017-12-01

    Echinoidea (sea urchins) are small, spiny, globular animals that populate the seafloors of nearly the entire planet. Echinoidea have existed on Earth since the Ordovician period, and from their ancient origin there is much to be learned about the relationship between Echinoidea body size and how it affects the survivability of the individual. The goal of this project is to determine how Echinoidea dimensions such as body volume, area, and length compare across extinct and extant species by plotting Echinoidea data in R. We used stratigraphic data to identify which species of sea urchin in our data are extinct. We then created three sets of three histograms of the size data, one histogram per measurement type (length, area, and volume): one set for all sea urchins, one for extinct sea urchins, and one for extant sea urchins. Our data showed that extant sea urchins were larger and extinct sea urchins smaller: the average length of all sea urchins was 54.95791 mm, of extinct sea urchins 51.0337 mm, and of extant sea urchins 66.12774 mm. There is a generally increasing trend in size over time, except for a small outlier about 350 million years ago, where echinoderm extinction selected towards larger species and biovolume was abnormally high. Our data also showed that over the past 200 million years, echinoderm extinction selectivity drove slightly smaller sea urchins towards extinction, further supporting the idea that a larger size was and still is advantageous for echinoderms.

  6. Determining Changes in Electromyography Indices when Measuring Maximum Acceptable Weight of Lift in Iranian Male Students.

    Science.gov (United States)

    Salehi Sahl Abadi, A; Mazloumi, A; Nasl Saraji, G; Zeraati, H; Hadian, M R; Jafari, A H

    2018-03-01

    In spite of the increasing degree of automation in industry, manual material handling (MMH) is still performed in many occupational settings. The aim of the current study was to determine the maximum acceptable weight of lift using psychophysical and electromyography indices. This experimental study was conducted among 15 male students recruited from Tehran University of Medical Sciences. Each participant performed 18 different lifting tasks, which involved three lifting frequencies, three lifting heights and two box sizes. Each set of experiments was conducted during a 20 min work period using a free-style lifting technique and both subjective and objective assessment methodologies. SPSS version 18 software was used for descriptive and analytical analyses with the Friedman, Wilcoxon and Spearman correlation tests. The results demonstrated that muscle activity increased with increasing frequency, height of lift and box size (P<0.05). Meanwhile, the MAWLs obtained in this study are lower than those in the Snook tables (P<0.05). Muscle activity exceeded 70% of MVC in 38.89%, 27.78%, 11.11% and 5.55% of test conditions for the erector spinae in the L3 region, the erector spinae in the T9 region, and the left and right abdominal external oblique muscles, respectively. The results of the Wilcoxon test revealed that for both small and large boxes under all conditions, significant differences were detected between values at the beginning and end of the test for the MPF of the erector spinae in the L3 and T9 regions and the left and right abdominal external oblique muscles (P<0.05). The results of the Spearman correlation test showed a significant relation between the MAWL, RMS and MPF of the muscles in all test conditions (P<0.05). Based on the results of this study, it was concluded that if muscle activity exceeds 70% of MVC, the values of the Snook tables should be revisited. Furthermore, the biomechanical perspective should receive special attention.

  7. Validation of calculated tissue maximum ratio obtained from measured percentage depth dose (PDD) data for high energy photon beams (6 MV and 15 MV)

    International Nuclear Information System (INIS)

    Osei, J.E.

    2014-07-01

    During external beam radiotherapy treatments, high doses are delivered to cancerous cells. Accuracy and precision of dose delivery are primary requirements for effective and efficient treatment. This leads to the consideration of treatment parameters such as percentage depth dose (PDD), tissue air ratio (TAR) and tissue phantom ratio (TPR), which describe the dose distribution in the patient. Using tissue air ratio (TAR) for treatment time calculation, however, requires measurement of the in-air dose rate. For lower energies this measurement is not a problem, but for higher energies in-air measurement is not attainable due to the large build-up material required. Tissue maximum ratio (TMR) is the quantity that replaces tissue air ratio (TAR) for high energy photon beams, and it is an important dosimetric function in radiotherapy treatment. The calculation methods used to determine TMR from PDD were derived by considering the differences between TMR and PDD, such as geometry and field size, with phantom scatter or peak scatter factors used to correct the dosimetric variation due to field size differences. The purpose of this study is to examine the accuracy of calculated TMR data against measured TMR values for 6 MV and 15 MV photon beams at the Sweden Ghana Medical Centre. With the help of a Blue motorized water phantom and the Omni Pro-Accept software, the PDD values from which TMRs are calculated were measured at 100 cm source-to-surface distance (SSD) for square field sizes from 5x5 cm to 40x40 cm and depths of 1.5 cm to 25 cm for the 6 MV and 15 MV x-ray beams. With the same field sizes, depths and energies, the TMR values were measured directly. The validity of the calculated data was determined by comparison with values measured experimentally at selected field sizes and depths. The results show that the reference depth of maximum...
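
    The textbook PDD-to-TMR conversion that such a validation rests on (e.g., Khan's formulation) can be sketched as follows; the phantom-scatter ratio is set to 1 for simplicity, and the example numbers are typical values rather than the study's measurements:

    ```python
    def tmr_from_pdd(pdd_percent, depth_cm, ssd_cm=100.0, dmax_cm=1.5,
                     sp_ratio=1.0):
        """Standard textbook relation (e.g. Khan):
            TMR(d, r_d) = (PDD/100) * ((SSD + d) / (SSD + dmax))**2
                           * Sp(r_dmax) / Sp(r_d)
        sp_ratio is the phantom-scatter correction Sp(r_dmax)/Sp(r_d),
        close to 1 and approximated as 1 here. dmax_cm = 1.5 matches a
        6 MV beam; 15 MV uses a deeper reference depth (~2.5-3 cm)."""
        inverse_square = ((ssd_cm + depth_cm) / (ssd_cm + dmax_cm)) ** 2
        return (pdd_percent / 100.0) * inverse_square * sp_ratio

    # e.g. a 6 MV, 10x10 cm field with PDD ~ 67% at 10 cm depth, SSD 100 cm:
    print(f"TMR(10 cm) ~ {tmr_from_pdd(67.0, 10.0):.3f}")  # ~0.79
    ```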

  8. Multi-Objective Evaluation of Target Sets for Logistics Networks

    National Research Council Canada - National Science Library

    Emslie, Paul

    2000-01-01

    .... In the presence of many objectives--such as reducing maximum flow, lengthening routes, avoiding collateral damage, all at minimal risk to our pilots--the problem of determining the best target set is complex...

  9. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system, based on maximum current searching methods, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side simultaneously tracks the maximum power point of the photovoltaic panel. This method uses a proportional-integral (PI) controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
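
    A generic sketch of the maximum-current-searching idea follows. The authors drive the converter duty factor with a PI controller and PWM; the loop below is a simplified perturb-and-observe stand-in, and read_current/set_duty are hypothetical hardware interfaces:

    ```python
    def track_maximum_current(read_current, set_duty, d0=0.5,
                              step=0.01, iters=200):
        """Hill-climbing on the converter duty factor, maximizing the
        measured output current (which, per the analysis above, tracks
        the panel's maximum power point)."""
        d = d0
        set_duty(d)
        last_i = read_current()
        direction = +1
        for _ in range(iters):
            d = min(max(d + direction * step, 0.0), 1.0)
            set_duty(d)
            i = read_current()
            if i < last_i:          # current dropped: reverse perturbation
                direction = -direction
            last_i = i
        return d

    # Toy plant: output current peaks at duty = 0.62 (made-up curve)
    state = {"d": 0.5}
    duty_opt = track_maximum_current(
        read_current=lambda: 5.0 - 40.0 * (state["d"] - 0.62) ** 2,
        set_duty=lambda d: state.update(d=d),
    )
    print(f"converged duty factor ~ {duty_opt:.2f}")  # oscillates near 0.62
    ```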

  10. Variation in age and size in Fennoscandian three-spined sticklebacks (Gasterosteus aculeatus).

    Science.gov (United States)

    DeFaveri, Jacquelin; Merilä, Juha

    2013-01-01

    Average age and maximum life span of breeding adult three-spined sticklebacks (Gasterosteus aculeatus) were determined in eight Fennoscandian localities with the aid of skeletochronology. The average age varied from 1.8 to 3.6 years, and maximum life span from three to six years depending on the locality. On average, fish from marine populations were significantly older than those from freshwater populations, but variation within habitat types was large. We also found significant differences in mean body size among different habitat types and populations, but only the population differences remained significant after accounting for variation due to age effects. These results show that generation length and longevity in three-spined sticklebacks can vary significantly from one locality to another, and that population differences in mean body size cannot be explained as a simple consequence of differences in population age structure. We also describe a nanistic population from northern Finland exhibiting long life span and small body size.

  11. Variation in age and size in Fennoscandian three-spined sticklebacks (Gasterosteus aculeatus).

    Directory of Open Access Journals (Sweden)

    Jacquelin DeFaveri

    Average age and maximum life span of breeding adult three-spined sticklebacks (Gasterosteus aculeatus) were determined in eight Fennoscandian localities with the aid of skeletochronology. The average age varied from 1.8 to 3.6 years, and maximum life span from three to six years depending on the locality. On average, fish from marine populations were significantly older than those from freshwater populations, but variation within habitat types was large. We also found significant differences in mean body size among different habitat types and populations, but only the population differences remained significant after accounting for variation due to age effects. These results show that generation length and longevity in three-spined sticklebacks can vary significantly from one locality to another, and that population differences in mean body size cannot be explained as a simple consequence of differences in population age structure. We also describe a nanistic population from northern Finland exhibiting long life span and small body size.

  12. 19 mm sized bileaflet valve prostheses' flow field investigated by bidimensional laser Doppler anemometry (part I: velocity profiles).

    Science.gov (United States)

    Barbaro, V; Grigioni, M; Daniele, C; D'Avenio, G; Boccanera, G

    1997-11-01

    The investigation of the flow field downstream of a cardiac valve prosthesis is a well established task. In particular, turbulence generation is of interest when damage to blood constituents is to be assessed. Several prosthetic valve flow studies are available in the literature, but they generally concern large-sized prostheses. The FDA draft guidance requires studying maximum Reynolds number conditions for a cardiac valve model to assess the worst case in turbulence, by choosing both the minimum valve diameter and a high cardiac output value as the protocol set-up. Within the framework of a national research project on the characterization of cardiovascular endoprostheses, the Laboratory of Biomedical Engineering is currently conducting an in-depth study of turbulence generated downstream of bileaflet cardiac valves. Four models of 19 mm sized bileaflet valve prostheses, namely St Jude Medical HP, Edwards Tekna, Sorin Bicarbon, and CarboMedics, were studied in the aortic position. The prostheses were selected for the nominal annulus diameter reported by the manufacturers, without any assessment of the valve sizing method. The hemodynamic function was investigated using a bidimensional LDA system. Results concern velocity profiles during the peak-flow systolic phase at a high cardiac output regime, highlighting the different flow field features downstream of the four small-sized cardiac valves.

  13. Determination of Maximum Follow-up Speed of Electrode System of Resistance Projection Welders

    DEFF Research Database (Denmark)

    Wu, Pei; Zhang, Wenqi; Bay, Niels

    2004-01-01

    ... the weld process settings for stable production and high quality of products. In this paper, the maximum follow-up speed of the electrode system was tested by using a specially designed device which can be mounted on all types of machines and easily applied in industry; the corresponding mathematical expression was derived based on a mathematical model. Good accordance was found between test and model...

  14. Prediction of the maximum absorption wavelength of azobenzene dyes by QSPR tools

    Science.gov (United States)

    Xu, Xuan; Luan, Feng; Liu, Huitao; Cheng, Jianbo; Zhang, Xiaoyun

    2011-12-01

    The maximum absorption wavelength (λmax) of a large data set of 191 azobenzene dyes was predicted by quantitative structure-property relationship (QSPR) tools. The λmax was correlated with 4 molecular descriptors calculated from the structure of the dyes alone. The multiple linear regression (MLR) method and the non-linear radial basis function neural network (RBFNN) method were applied to develop the models. The statistical parameters provided by the MLR model were R² = 0.893, adjusted R² = 0.893, q²(LOO) = 0.884, F = 1214.871, RMS = 11.6430 for the training set; and R² = 0.849, adjusted R² = 0.845, q²(ext) = 0.846, F = 207.812, RMS = 14.0919 for the external test set. The RBFNN model gave even better statistical results: R² = 0.920, adjusted R² = 0.919, q²(LOO) = 0.898, F = 1664.074, RMS = 9.9215 for the training set, and R² = 0.895, adjusted R² = 0.892, q²(ext) = 0.895, F = 314.256, RMS = 11.6427 for the external test set. This theoretical approach provides a simple and precise alternative method to obtain the λmax of azobenzene dyes.
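
    A minimal sketch of the MLR half of such a QSPR workflow, including the leave-one-out q² statistic reported above; the descriptor matrix is synthetic stand-in data, not the dyes from the paper:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    # Hypothetical stand-in for the paper's data: X holds 4 molecular
    # descriptors per dye, y the measured lambda_max (nm).
    rng = np.random.default_rng(2)
    X = rng.normal(size=(191, 4))
    y = X @ np.array([30.0, -12.0, 8.0, 5.0]) + 450.0 + rng.normal(0, 12, 191)

    mlr = LinearRegression().fit(X, y)
    r2 = mlr.score(X, y)
    # Leave-one-out q2, reported alongside R2 in QSPR studies
    y_loo = cross_val_predict(mlr, X, y, cv=LeaveOneOut())
    q2 = 1.0 - ((y - y_loo) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    print(f"R2 = {r2:.3f}, q2(LOO) = {q2:.3f}")
    ```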

  15. Sequential and Parallel Algorithms for Finding a Maximum Convex Polygon

    DEFF Research Database (Denmark)

    Fischer, Paul

    1997-01-01

    This paper investigates the problem where one is given a finite set of n points in the plane, each of which is labeled either "positive" or "negative". We consider bounded convex polygons, the vertices of which are positive points and which do not contain any negative point. It is shown how such a polygon which is maximal with respect to area can be found in time O(n³ log n). With the same running time one can also find such a polygon which contains a maximum number of positive points. If, in addition, the number of vertices of the polygon is restricted to be at most M, then the running time becomes O(M n³ log n). It is also shown how to find a maximum convex polygon which contains a given point in time O(n³ log n). Two parallel algorithms for the basic problem are also presented. The first one runs in time O(n log n) using O(n²) processors, the second one has polylogarithmic time but needs O...

  16. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error, or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult to conduct and will waste time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts in estimating the sample size.
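
    For the two-group comparison of means described above, the standard normal-approximation formula ties the detectable difference, alpha and power together; a minimal sketch with illustrative numbers:

    ```python
    from math import ceil
    from statistics import NormalDist

    def n_per_group(delta, sd, alpha=0.05, power=0.80):
        """Per-group sample size for comparing two means (two-sided test):
            n = 2 * (sd * (z_{1-alpha/2} + z_{power}) / delta)**2
        where delta is the minimal difference worth detecting."""
        z = NormalDist().inv_cdf
        return ceil(2 * (sd * (z(1 - alpha / 2) + z(power)) / delta) ** 2)

    # Detect a 5-point difference with SD 10 at alpha = 0.05, power = 80%:
    print(n_per_group(delta=5, sd=10))  # -> 63 per group
    ```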

  17. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  18. A scalable method for identifying frequent subtrees in sets of large phylogenetic trees.

    Science.gov (United States)

    Ramu, Avinash; Kahveci, Tamer; Burleigh, J Gordon

    2012-10-03

    We consider the problem of finding the maximum frequent agreement subtrees (MFASTs) in a collection of phylogenetic trees. Existing methods for this problem often do not scale beyond datasets with around 100 taxa. Our goal is to address this problem for datasets with over a thousand taxa and hundreds of trees. We develop a heuristic solution that aims to find MFASTs in sets of many, large phylogenetic trees. Our method works in multiple phases. In the first phase, it identifies small candidate subtrees from the set of input trees which serve as the seeds of larger subtrees. In the second phase, it combines these small seeds to build larger candidate MFASTs. In the final phase, it performs a post-processing step that ensures that we find a frequent agreement subtree that is not contained in a larger frequent agreement subtree. We demonstrate that this heuristic can easily handle data sets with 1000 taxa, greatly extending the estimation of MFASTs beyond current methods. Although this heuristic does not guarantee to find all MFASTs or the largest MFAST, it found the MFAST in all of our synthetic datasets where we could verify the correctness of the result. It also performed well on large empirical data sets. Its performance is robust to the number and size of the input trees. Overall, this method provides a simple and fast way to identify strongly supported subtrees within large phylogenetic hypotheses.

  19. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance and temperature is examined. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.

  20. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
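
    The Toeplitz/Levinson step mentioned above is a standard recursion. Below is a minimal sketch of the Levinson-Durbin solution for the prediction-error filter, assuming the autocorrelation sequence r[0..order] is already available; the stability property noted in the abstract corresponds to the reflection coefficient k staying inside (-1, 1).

    ```python
    import numpy as np

    def levinson_durbin(r, order):
        """Solve the Toeplitz normal equations for a prediction-error filter
        from the autocorrelation sequence r; returns (coefficients, error power)."""
        r = np.asarray(r, dtype=float)
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for m in range(1, order + 1):
            acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
            k = -acc / err                 # reflection coefficient, |k| < 1
            prev = a[1:m].copy()
            a[1:m] = prev + k * prev[::-1]
            a[m] = k
            err *= 1.0 - k * k             # prediction-error power shrinks
        return a, err

    # AR(1)-like autocorrelation with rho = 0.5: expect a = [1, -0.5, 0]
    print(levinson_durbin([1.0, 0.5, 0.25], order=2))
    ```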

  1. Connection between the growth rate distribution and the size dependent crystal growth

    Science.gov (United States)

    Mitrović, M. M.; Žekić, A. A.; Ilić, Z. Z.

    2002-07-01

    The results of investigations of the connection between the growth rate dispersions and the size dependent crystal growth of potassium dihydrogen phosphate (KDP), Rochelle salt (RS) and sodium chlorate (SC) are presented. A possible way out of the existing confusion in size dependent crystal growth investigations is suggested. It is shown that size independent growth exists if the crystals belonging to one growth rate distribution maximum are considered separately. The investigations suggest a possible reason for the observed widths of the distribution maxima, and for the high scatter in the growth rate versus crystal size data.

  2. Size ratio correlates with intracranial aneurysm rupture status: a prospective study.

    Science.gov (United States)

    Rahman, Maryam; Smietana, Janel; Hauck, Erik; Hoh, Brian; Hopkins, Nick; Siddiqui, Adnan; Levy, Elad I; Meng, Hui; Mocco, J

    2010-05-01

    The prediction of intracranial aneurysm (IA) rupture risk has generated significant controversy. The findings of the International Study of Unruptured Intracranial Aneurysms (ISUIA) that small anterior circulation aneurysms carry a minimal risk of rupture conflict with the observation that most ruptured IAs are small. These discrepancies have led to the search for better aneurysm parameters to predict rupture. We previously reported that size ratio (SR), IA size divided by parent vessel diameter, correlated strongly with IA rupture status (ruptured versus unruptured). Those data were all collected retrospectively from 3-dimensional angiographic images. Therefore, we performed a blinded prospective collection and evaluation of SR data from 2-dimensional angiographic images for a consecutive series of patients with ruptured and unruptured IAs. We prospectively enrolled 40 consecutive patients presenting to a single institution with either a ruptured IA or for first-time evaluation of an incidental IA. Blinded technologists acquired all measurements from 2-dimensional angiographic images. Aneurysm rupture status, location, IA maximum size, and parent vessel diameter were documented. The SR was calculated by dividing the aneurysm size (mm) by the average parent vessel size (mm). A 2-tailed Mann-Whitney test was performed to assess statistical significance between the ruptured and unruptured groups. The Fisher exact test was used to compare medical comorbidities between the ruptured and unruptured groups. Significant differences between the 2 groups were subsequently tested with logistic regression. SE and probability values are reported. Forty consecutive patients with 24 unruptured and 16 ruptured aneurysms met the inclusion criteria. No significant differences were found in age, gender, smoking status, or medical comorbidities between the ruptured and unruptured groups. The average maximum size of the unruptured IAs (6.18 ± 0.60 mm) was significantly smaller than that of the ruptured IAs (7.91 ± 0.47 mm; P=0.03), and the unruptured group had

  3. Size effect in barium titanate powders synthesized by different hydrothermal methods

    International Nuclear Information System (INIS)

    Sun Weian

    2006-01-01

    The size effect in barium titanate (BaTiO3) was investigated both experimentally and theoretically. Tetragonal BaTiO3 powders with average sizes from 80 to 420 nm were directly prepared by different hydrothermal methods. The tetragonality of the hydrothermal BaTiO3 decreased with decreasing particle size, which exhibited a dependence on the synthesis method. A phenomenological model for the size effect was proposed to interpret the experimental observations. The influence of defects, mainly the lattice hydroxyl, on the size effect was investigated to understand the correlation between the size effect and the synthesis condition. The permittivities of BaTiO3 powder at different particle sizes were calculated, predicting a maximum permittivity of over 16 000 around the room-temperature critical size of ∼70 nm. The prediction was in good accordance with recently reported experimental data.

  4. Measuring conflict and power in strategic settings

    OpenAIRE

    Giovanni Rossi

    2009-01-01

    This is a quantitative approach to measuring conflict and power in strategic settings: noncooperative games (with cardinal or ordinal utilities) and blockings (without any preference specification). A (0, 1)-ranged index is provided, taking its minimum on common interest games, and its maximum on a newly introduced class termed “full conflict” games.

  5. Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.

    Science.gov (United States)

    Farsani, Zahra Amini; Schmid, Volker J

    2017-01-01

    In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM, and kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data and is useful for generating a probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data, allows a good fit of the data, and therefore yields a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.
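
    The MEM step above amounts to choosing the least-informative distribution consistent with known constraints. The sketch below shows only that core idea, for a discrete pmf with a fixed mean (whose maximum-entropy solution is exponential-family); it is not the authors' DCE-MRI pipeline, and the support and target mean are invented.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def maxent_discrete(x, target_mean):
        """Maximum-entropy pmf on support `x` with a fixed mean: the solution
        has the exponential form p_i proportional to exp(lam * x_i)."""
        def mean_of(lam):
            w = np.exp(lam * (x - x.mean()))  # shifted for numerical stability
            p = w / w.sum()
            return p @ x

        lam = brentq(lambda l: mean_of(l) - target_mean, -50.0, 50.0)
        w = np.exp(lam * (x - x.mean()))
        return w / w.sum()

    x = np.linspace(0.0, 10.0, 11)
    p = maxent_discrete(x, target_mean=3.0)
    print(p @ x)  # ~3.0; entropy is maximal among pmfs on x with this mean
    ```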

  6. Reconstructing relative genome size of vascular plants through geological time.

    Science.gov (United States)

    Lomax, Barry H; Hilton, Jason; Bateman, Richard M; Upchurch, Garland R; Lake, Janice A; Leitch, Ilia J; Cromwell, Avery; Knight, Charles A

    2014-01-01

    The strong positive relationship evident between cell and genome size in both animals and plants forms the basis of using the size of stomatal guard cells as a proxy to track changes in plant genome size through geological time. We report for the first time a taxonomic fine-scale investigation into changes in stomatal guard-cell length and use these data to infer changes in genome size through the evolutionary history of land plants. Our data suggest that many of the earliest land plants had exceptionally large genome sizes and that a predicted overall trend of increasing genome size within individual lineages through geological time is not supported. However, maximum genome size steadily increases from the Mississippian (c. 360 million yr ago (Ma)) to the present. We hypothesise that the functional relationship between stomatal size, genome size and atmospheric CO2 may contribute to the dichotomy reported between preferential extinction of neopolyploids and the prevalence of palaeopolyploidy observed in DNA sequence data of extant vascular plants. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.

  7. Methodological basis for the optimization of a marine sea-urchin embryo test (SET) for the ecological assessment of coastal water quality.

    Science.gov (United States)

    Saco-Alvarez, Liliana; Durán, Iria; Ignacio Lorenzo, J; Beiras, Ricardo

    2010-05-01

    The sea-urchin embryo test (SET) has been frequently used as a rapid, sensitive, and cost-effective biological tool for marine monitoring worldwide, but the selection of a sensitive, objective, and automatically readable endpoint, a stricter quality control to guarantee optimum handling and biological material, and the identification of confounding factors that interfere with the response have hampered its widespread routine use. Size increase in a minimum of n=30 individuals per replicate, either normal larvae or earlier developmental stages, was preferred to observer-dependent, discontinuous responses as the test endpoint. Control size increase after 48 h incubation at 20 degrees C must meet an acceptability criterion of 218 microm. In order to avoid false positives, minimums of 32 per thousand salinity, pH 7, and 2 mg/L oxygen, and a maximum of 40 microg/L NH(3) (NOEC), are required in the incubation media. For in situ testing, size increase rates must be corrected on a degree-day basis using 12 degrees C as the developmental threshold. Copyright 2010 Elsevier Inc. All rights reserved.
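
    The acceptability criterion and the confounding-factor limits listed above translate directly into a validity check. A minimal sketch follows; the threshold values are taken from the abstract, while the function itself is our own illustration.

    ```python
    def set_assay_valid(control_growth_um, salinity_ppt, ph, oxygen_mg_l, nh3_ug_l):
        """Return (valid, reasons): control growth must reach 218 um after
        48 h at 20 C, and the media must avoid the confounding ranges."""
        reasons = []
        if control_growth_um < 218:
            reasons.append("control size increase below the 218 um criterion")
        if salinity_ppt < 32:
            reasons.append("salinity below 32 per thousand")
        if ph < 7:
            reasons.append("pH below 7")
        if oxygen_mg_l < 2:
            reasons.append("dissolved oxygen below 2 mg/L")
        if nh3_ug_l > 40:
            reasons.append("NH3 above the 40 ug/L NOEC")
        return (not reasons), reasons

    print(set_assay_valid(230, 33, 7.8, 5.2, 10))  # (True, [])
    ```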

  8. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene generate the “true tree” under all four algorithms. However, the most frequent gene tree, termed the “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among the species in comparison.
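
    A compact way to see the "most frequent gene tree" idea is to count canonical topology strings across genes and take the mode, with optional per-gene weights giving the WMGS variant; the topology strings below are toy placeholders, not data from the study.

    ```python
    from collections import Counter

    def mgs_tree(gene_tree_topologies, weights=None):
        """Return (topology, support): the most frequent gene-tree topology;
        per-gene weights give the weighted (WMGS) variant."""
        counts = Counter()
        for i, topo in enumerate(gene_tree_topologies):
            counts[topo] += weights[i] if weights else 1
        return counts.most_common(1)[0]

    genes = ["((A,B),C)", "((A,B),C)", "((A,C),B)", "((A,B),C)"]
    print(mgs_tree(genes))  # ('((A,B),C)', 3): the maximum gene-support tree
    ```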

  9. Achieving temperature-size changes in a unicellular organism

    Science.gov (United States)

    Forster, Jack; Hirst, Andrew G; Esteban, Genoveva F

    2013-01-01

    The temperature-size rule (TSR) is an intraspecific phenomenon describing the phenotypic plastic response of organism size to temperature: individuals reared at cooler temperatures mature to be larger adults than those reared at warmer temperatures. The TSR is ubiquitous, affecting >80% of species, including uni- and multicellular groups. How the TSR is established has received attention in multicellular organisms, but not in unicells. Further, conceptual models suggest the mechanism of size change to be different in these two groups. Here, we test these theories using the protist Cyclidium glaucoma. We measure cell sizes, along with population growth during temperature acclimation, to determine how and when the temperature-size changes are achieved. We show that mother and daughter sizes become temporarily decoupled from the ratio 2:1 during acclimation, but these return to their coupled state (where daughter cells are half the size of the mother cell) once acclimated. Thermal acclimation is rapid, being completed within approximately a single generation. Further, we examine the impact of increased temperatures on carrying capacity and total biomass, to investigate potential adaptive strategies of size change. We demonstrate no temperature effect on carrying capacity, but find maximum supported biomass to decrease with increasing temperature. PMID:22832346

  10. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    Energy Technology Data Exchange (ETDEWEB)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au [School of Chemistry and Biochemistry, The University of Western Australia, Perth, WA 6009 (Australia)

    2015-05-15

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L{sup α} two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol{sup –1}. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol{sup –1}.
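
    The two-point formula E(L) = E_CBS + B/L^alpha quoted above can be inverted in closed form from two energies. The sketch below uses the conventional global exponent alpha = 3 and made-up DZ/TZ correlation energies; the paper's system-dependent variant instead fits a per-system alpha from cheaper MP2 calculations.

    ```python
    def extrapolate_cbs(e_small, l_small, e_large, l_large, alpha=3.0):
        """Solve E(L) = E_CBS + B * L**-alpha from two (L, E) points."""
        b = (e_small - e_large) / (l_small**-alpha - l_large**-alpha)
        return e_large - b * l_large**-alpha

    # Made-up DZ (L=2) and TZ (L=3) correlation energies, in hartree
    print(extrapolate_cbs(-0.550, 2, -0.610, 3))  # CBS-limit estimate
    ```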

  11. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    International Nuclear Information System (INIS)

    Spackman, Peter R.; Karton, Amir

    2015-01-01

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.

  12. Maximum principle for a stochastic delayed system involving terminal state constraints.

    Science.gov (United States)

    Wen, Jiaqiang; Shi, Yufeng

    2017-01-01

    We investigate a stochastic optimal control problem where the controlled system is depicted as a stochastic differential delayed equation and, at the terminal time, the state is constrained to a convex set. We first introduce an equivalent backward delayed system depicted as a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state-constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main result.

  13. Performance of shear-wave elastography for breast masses using different region-of-interest (ROI) settings.

    Science.gov (United States)

    Youk, Ji Hyun; Son, Eun Ju; Han, Kyunghwa; Gweon, Hye Mi; Kim, Jeong-Ah

    2018-07-01

    Background: Various sizes and shapes of region of interest (ROI) can be applied for shear-wave elastography (SWE). Purpose: To investigate the diagnostic performance of SWE according to ROI settings for breast masses. Material and Methods: To measure elasticity for 142 lesions, ROIs were set as follows: circular ROIs 1 mm (ROI-1), 2 mm (ROI-2), and 3 mm (ROI-3) in diameter placed over the stiffest part of the mass; freehand ROIs drawn by tracing the border of the mass (ROI-M) and the area of peritumoral increased stiffness (ROI-MR); and circular ROIs placed within the mass (ROI-C) and to encompass the area of peritumoral increased stiffness (ROI-CR). Mean (Emean), maximum (Emax), and standard deviation (ESD) of elasticity values and their areas under the receiver operating characteristic (ROC) curve (AUCs) for diagnostic performance were compared. Results: Means of Emean and ESD significantly differed between ROI-1, ROI-2, and ROI-3. Conclusion: Shear-wave elasticity values and their diagnostic performance vary based on ROI settings and elasticity indices. Emax is recommended for the ROIs over the stiffest part of the mass, and an ROI encompassing the peritumoral area of increased stiffness is recommended for elastic heterogeneity of the mass.

  14. EXTREME MAXIMUM AND MINIMUM AIR TEMPERATURE IN MEDİTERRANEAN COASTS IN TURKEY

    Directory of Open Access Journals (Sweden)

    Barbaros Gönençgil

    2016-01-01

    Full Text Available In this study, we determined extreme maximum and minimum temperatures in both summer and winter seasons at stations in the Mediterranean coastal areas of Turkey. Daily maximum and minimum temperature data from 24 meteorological stations for the period 1970–2010 were used. From this database, a set of four extreme temperature indices was applied: warm days (TX90), cold days (TN10), warm spell duration (WSDI), and cold spell duration (CSDI). Threshold values were calculated for each station to determine the temperatures above and below the seasonal norms in winter and summer. The TX90 index displays a positive, statistically significant trend, while TN10 displays a negative, nonsignificant trend. The occurrence of warm spells shows a statistically significant increasing trend, while cold spells show a significantly decreasing trend over the Mediterranean coastline of Turkey.
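
    The percentile indices named above are straightforward to compute. The sketch below uses synthetic daily series and a single pooled threshold per station; note that the operational ETCCDI definitions of TX90 and TN10 use calendar-day percentile windows rather than one pooled percentile.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    tmax = rng.normal(30.0, 4.0, size=365 * 41)  # synthetic daily Tmax, 41 years
    tmin = rng.normal(18.0, 4.0, size=365 * 41)  # synthetic daily Tmin

    tx90_threshold = np.percentile(tmax, 90)  # warm-day threshold (TX90)
    tn10_threshold = np.percentile(tmin, 10)  # cold-day threshold (TN10)

    tx90_days = int((tmax > tx90_threshold).sum())
    tn10_days = int((tmin < tn10_threshold).sum())
    print(tx90_threshold, tx90_days, tn10_days)
    ```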

  15. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...

  16. 12 CFR 1291.6 - Homeownership set-aside programs.

    Science.gov (United States)

    2010-01-01

    Section 1291.6 Banks and Banking FEDERAL HOUSING FINANCE AGENCY HOUSING GOALS AND MISSION FEDERAL HOME LOAN BANKS' AFFORDABLE HOUSING PROGRAM § 1291.6 Homeownership set-aside programs. (a) Establishment of... ... as part of a disaster relief effort. (3) Maximum grant amount. Members shall provide AHP direct...

  17. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    A direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with the thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e., by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications that make it suitable for the application of the maximum principle. The optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  18. Construction of Pipelined Strategic Connected Dominating Set for Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Ceronmani Sharmila

    2016-06-01

    Full Text Available Efficient routing between nodes is the most important challenge in a Mobile Ad Hoc Network (MANET). A Connected Dominating Set (CDS) acts as a virtual backbone for routing in a MANET. Hence, constructing a CDS suited to the need and the application at hand plays a vital role in MANETs. The PipeLined Strategic CDS (PLS-CDS) is constructed based on strategy, dynamic diameter and transmission range. The strategy used for selecting the starting node is to choose any source node in the network whose destinations all lie within a virtual pipelined coverage, instead of the node with maximum connectivity. The other nodes are then selected based on density and velocity. The proposed CDS also utilizes the energy of the nodes in the network in an optimized manner. Simulation results showed that the proposed algorithm is better in terms of CDS size and average hops per path length.
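
    For contrast with the strategy-driven start-node selection described above, a generic greedy CDS baseline can be sketched as follows: start from the maximum-degree node and grow along the frontier. This shows only the textbook idea of a connected dominating set, not the PLS-CDS construction itself.

    ```python
    import networkx as nx

    def greedy_cds(g):
        """Greedy connected dominating set: grow from the max-degree node,
        always adding the frontier node that newly dominates the most nodes."""
        start = max(g.nodes, key=g.degree)
        cds = {start}
        dominated = {start} | set(g.neighbors(start))
        while len(dominated) < g.number_of_nodes():
            # frontier nodes are neighbours of the CDS, so adding one of
            # them keeps the dominating set connected
            frontier = {v for u in cds for v in g.neighbors(u)} - cds
            best = max(frontier, key=lambda v: len(set(g.neighbors(v)) - dominated))
            cds.add(best)
            dominated |= {best} | set(g.neighbors(best))
        return cds

    g = nx.random_geometric_graph(30, 0.35, seed=1)
    if nx.is_connected(g):
        print(sorted(greedy_cds(g)))
    ```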

  19. Coexistence of structured populations with size-based prey selection

    DEFF Research Database (Denmark)

    Hartvig, Martin; Andersen, Ken Haste

    2013-01-01

    Abstract Species with a large adult-offspring size ratio and a preferred predator–prey mass ratio undergo ontogenetic trophic niche shift(s) throughout life. Trophic interactions between such species vary throughout life, resulting in different species-level interaction motifs depending on the maximum adult sizes and population size distributions. We explore the assembly and potential for coexistence of small communities where all species experience ontogenetic trophic niche shifts. The life-history of each species is described by a physiologically structured model and species identity… While there is a large scope for coexistence of two species, the scope for coexistence of three species is limited, and we conclude that further trait differentiation is required for coexistence of more species-rich size-structured communities.

  20. An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet

    International Nuclear Information System (INIS)

    Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi

    2010-01-01

    Axial variation of the average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle of 0.8 mm in diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits a power scaling on the backing pressure ranging from 16 to 50 bar, and the power is strongly Z dependent, varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of the argon cluster. The scattered light intensity versus axial position shows that the maximum signal intensity occurs at the position of 5 mm. The estimated average cluster size as a function of axial position Z indicates that the cluster growth process goes forward until the maximum average cluster size is reached at Z = 9 mm, beyond which the average cluster size decreases gradually

  1. Size limits for rounding of volcanic ash particles heated by lightning

    Science.gov (United States)

    Wadsworth, Fabian B.; Vasseur, Jérémie; Llewellin, Edward W.; Genareau, Kimberly; Cimarelli, Corrado; Dingwell, Donald B.

    2017-03-01

    Volcanic ash particles can be remelted by the high temperatures induced in volcanic lightning discharges. The molten particles can round under surface tension then quench to produce glass spheres. Melting and rounding timescales for volcanic materials are strongly dependent on heating duration and peak temperature and are shorter for small particles than for large particles. Therefore, the size distribution of glass spheres recovered from ash deposits potentially record the short duration, high-temperature conditions of volcanic lightning discharges, which are hard to measure directly. We use a 1-D numerical solution to the heat equation to determine the timescales of heating and cooling of volcanic particles during and after rapid heating and compare these with the capillary timescale for rounding an angular particle. We define dimensionless parameters—capillary, Fourier, Stark, Biot, and Peclet numbers—to characterize the competition between heat transfer within the particle, heat transfer at the particle rim, and capillary motion, for particles of different sizes. We apply this framework to the lightning case and constrain a maximum size for ash particles susceptible to surface tension-driven rounding, as a function of lightning temperature and duration, and ash properties. The size limit agrees well with maximum sizes of glass spheres found in volcanic ash that has been subjected to lightning or experimental discharges, demonstrating that the approach that we develop can be used to obtain a first-order estimate of lightning conditions in volcanic plumes.
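
    The competition between capillary rounding and cooling can be illustrated at the order-of-magnitude level. In the sketch below the property values (melt viscosity, surface tension, thermal diffusivity) are generic silicate-melt assumptions rather than the paper's fitted parameters; comparing the two timescales gives a first-order sense of which particle sizes can round before they quench.

    ```python
    def rounding_vs_cooling(radius_m, viscosity_pa_s=1e4, surface_tension=0.3,
                            thermal_diffusivity=5e-7):
        """Capillary rounding timescale (eta*r/sigma) vs conductive cooling
        timescale (r**2/kappa) for a molten particle of radius `radius_m`."""
        t_capillary = viscosity_pa_s * radius_m / surface_tension
        t_conduction = radius_m**2 / thermal_diffusivity
        return t_capillary, t_conduction

    for r in (1e-6, 1e-5, 1e-4):  # 1 um to 100 um ash particles
        tc, tk = rounding_vs_cooling(r)
        print(f"r = {r:.0e} m: capillary {tc:.2e} s, conduction {tk:.2e} s")
    ```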

  2. D90: The Strongest Contributor to Setting Time in Mineral Trioxide Aggregate and Portland Cement.

    Science.gov (United States)

    Ha, William N; Bentz, Dale P; Kahler, Bill; Walsh, Laurence J

    2015-07-01

    The setting times of commercial mineral trioxide aggregate (MTA) and Portland cements vary. It was hypothesized that much of this variation was caused by differences in particle size distribution. Two-gram samples from 11 MTA-type cements were analyzed by laser diffraction to determine their particle size distributions, characterized by their percentile equivalent diameters (the 10th percentile, the median, and the 90th percentile [d90], respectively). Setting time data were received from manufacturers who performed indentation setting time tests as specified by the standards relevant to dentistry, ISO 6876 (9 respondents) or ISO 9917.1 (1 respondent), or not divulged to the authors (1 respondent). In a parallel experiment, 6 samples of different size graded Portland cements were produced using the same cement clinker. The measurement of setting time for Portland cement pastes was performed using American Society for Testing and Materials C 191. Cumulative heat release was measured using isothermal calorimetry to assess the reactions occurring during the setting of these pastes. In all experiments, linear correlations were assessed between setting times, heat release, and the 3 particle size parameters. Particle size varied considerably among MTA cements. For MTA cements, d90 was the particle size characteristic showing the highest positive linear correlation with setting time (r = 0.538). For Portland cement, d90 gave an even higher linear correlation for the initial setting time (r = 0.804) and the final setting time (r = 0.873) and exhibited a strong negative linear correlation for cumulative heat release (r = 0.901). Smaller particle sizes result in faster setting times, with d90 (the largest particles) being most closely correlated with the setting times of the samples. Copyright © 2015 American Association of Endodontists. All rights reserved.
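
    The reported linear correlations are ordinary Pearson r values between a particle size percentile and setting time. The sketch below reproduces that analysis on invented (d90, setting time) pairs; the numbers are illustrative, not the measured data.

    ```python
    import numpy as np

    d90 = np.array([12.0, 18.5, 25.0, 33.0, 41.5, 50.0])    # um, invented
    setting_time = np.array([95, 120, 150, 185, 230, 260])  # minutes, invented

    r = np.corrcoef(d90, setting_time)[0, 1]      # Pearson linear correlation
    slope, intercept = np.polyfit(d90, setting_time, 1)
    print(f"r = {r:.3f}; setting_time ~ {slope:.1f} * d90 + {intercept:.1f}")
    ```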

  3. Constraints on the adult-offspring size relationship in protists.

    Science.gov (United States)

    Caval-Holme, Franklin; Payne, Jonathan; Skotheim, Jan M

    2013-12-01

    The relationship between adult and offspring size is an important aspect of reproductive strategy. Although this filial relationship has been extensively examined in plants and animals, we currently lack comparable data for protists, whose strategies may differ due to the distinct ecological and physiological constraints on single-celled organisms. Here, we report measurements of adult and offspring sizes in 3888 species and subspecies of foraminifera, a class of large marine protists. Foraminifera exhibit a wide range of reproductive strategies; species of similar adult size may have offspring whose sizes vary 100-fold. Yet, a robust pattern emerges. The minimum (5th percentile), median, and maximum (95th percentile) offspring sizes exhibit a consistent pattern of increase with adult size independent of environmental change and taxonomic variation over the past 400 million years. The consistency of this pattern may arise from evolutionary optimization of the offspring size-fecundity trade-off and/or from cell-biological constraints that limit the range of reproductive strategies available to single-celled organisms. When compared with plants and animals, foraminifera extend the evidence that offspring size covaries with adult size across an additional five orders of magnitude in organism size. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  4. Maximum entropy networks are more controllable than preferential attachment networks

    International Nuclear Information System (INIS)

    Hou, Lvlin; Small, Michael; Lao, Songyang

    2014-01-01

    A maximum entropy (ME) method to generate typical scale-free networks has been recently introduced. We investigate the controllability of ME networks and Barabási–Albert preferential attachment networks. Our experimental results show that ME networks are significantly more easily controlled than BA networks of the same size and the same degree distribution. Moreover, the control profiles are used to provide insight into the control properties of both classes of network. We identify and classify the driver nodes and analyze the connectivity of their neighbors. We find that driver nodes in ME networks have fewer mutual neighbors and that their neighbors have lower average degree. We conclude that the properties of the neighbors of driver nodes sensitively affect the network controllability. Hence, subtle and important structural differences exist between BA networks and typical scale-free networks of the same degree distribution. - Highlights: • The controllability of maximum entropy (ME) and Barabási–Albert (BA) networks is investigated. • ME networks are significantly more easily controlled than BA networks of the same degree distribution. • The properties of the neighbors of driver nodes sensitively affect the network controllability. • Subtle and important structural differences exist between BA networks and typical scale-free networks

  5. Radiation pressure acceleration: The factors limiting maximum attainable ion energy

    Energy Technology Data Exchange (ETDEWEB)

    Bulanov, S. S.; Esarey, E.; Schroeder, C. B. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Bulanov, S. V. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); A. M. Prokhorov Institute of General Physics RAS, Moscow 119991 (Russian Federation); Esirkepov, T. Zh.; Kando, M. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); Pegoraro, F. [Physics Department, University of Pisa and Istituto Nazionale di Ottica, CNR, Pisa 56127 (Italy); Leemans, W. P. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Physics Department, University of California, Berkeley, California 94720 (United States)

    2016-05-15

    Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. The tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in the experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it transparent for radiation and effectively terminating the acceleration. The off-normal incidence of the laser on the target, due either to the experimental setup, or to the deformation of the target, will also lead to establishing a limit on maximum ion energy.

  6. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act. SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  7. Towards Optimal Buffer Size in Wi-Fi Networks

    KAUST Repository

    Showail, Ahmad J.

    2016-01-19

    Buffer sizing is an important network configuration parameter that impacts the quality of data traffic. Falling memory cost and the fallacy that ‘more is better’ lead to over-provisioning network devices with large buffers. Over-buffering, or the so-called ‘bufferbloat’ phenomenon, creates excessive end-to-end delay in today’s networks. On the other hand, under-buffering results in frequent packet loss and subsequent under-utilization of network resources. The buffer sizing problem has been studied extensively for wired networks. However, there is little work addressing the unique challenges of the wireless environment. In this dissertation, we discuss buffer sizing challenges in wireless networks, classify the state-of-the-art solutions, and propose two novel buffer sizing schemes. The first scheme targets buffer sizing in wireless multi-hop networks where the radio spectral resource is shared among a set of contending nodes. Hence, it sizes the buffer collectively and distributes it over a set of interfering devices. The second buffer sizing scheme is designed to cope with recent Wi-Fi enhancements. It adapts the buffer size based on measured link characteristics and network load. Also, it enforces limits on the buffer size to maximize frame aggregation benefits. Both mechanisms are evaluated using simulation as well as testbed implementation over half-duplex and full-duplex wireless networks. Experimental evaluation shows that our proposal reduces latency by an order of magnitude.
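
    A back-of-envelope sketch of why ‘more buffer’ is not better: the classic wired-network rule sizes a buffer near the bandwidth-delay product (BDP), and anything much larger only adds queueing delay. The rate and RTT below are illustrative, and the rule itself predates the dissertation's wireless-specific schemes.

    ```python
    def bdp_packets(rate_mbps, rtt_ms, pkt_bytes=1500):
        """Bandwidth-delay product expressed in packets."""
        return int(rate_mbps * 1e6 / 8 * rtt_ms / 1e3 / pkt_bytes)

    def queueing_delay_ms(buffer_pkts, rate_mbps, pkt_bytes=1500):
        """Worst-case delay added by a full buffer draining at line rate."""
        return buffer_pkts * pkt_bytes * 8 / (rate_mbps * 1e6) * 1e3

    bdp = bdp_packets(rate_mbps=54, rtt_ms=20)       # ~90 packets
    print(bdp, queueing_delay_ms(10 * bdp, 54))      # 10x BDP -> ~200 ms delay
    ```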

  8. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  9. How to design your stand-by diesel generator unit for maximum reliability

    International Nuclear Information System (INIS)

    Kauffmann, W.M.

    1979-01-01

    Critical stand-by power applications, such as nuclear plants or radio support stations, demand exacting guidelines for positive start, rapid acceleration, load acceptance with minimum voltage drop, and quick recovery to rated voltage. The design of medium-speed turbocharged and intercooled diesel engine-generators for this purpose is considered. Selection of the diesel engine, size, and number of units, from the standpoint of cost, favors a minimum number of units with maximum horsepower capability. Four-cycle diesels are available in 16- to 20-cylinder V-configurations, with 200 BMEP (brake mean-effective pressure) continuous and 250 BMEP peaking

  10. Maximum power point tracker for portable photovoltaic systems with resistive-like load

    Energy Technology Data Exchange (ETDEWEB)

    De Cesare, G.; Caputo, D.; Nascetti, A. [Department of Electronic Engineering, University of Rome La Sapienza via Eudossiana, 18 00184 Rome (Italy)

    2006-08-15

    In this work we report on the design and realization of a maximum power point tracking (MPPT) circuit suitable for low-power, portable applications with a resistive load. The design rules included cost, size and power efficiency considerations. A novel scheme for the implementation of the control loop of the MPPT circuit is proposed, combining good performance with a compact design. The operation and performance were simulated at the circuit-schematic level with the Simulation Program with Integrated Circuit Emphasis (SPICE). The improved operation of a PV system using our MPPT circuit was demonstrated using a purely resistive load. (author)

  11. Size, productivity, and international banking

    NARCIS (Netherlands)

    Buch, Claudia M.; Koch, Catherine T.; Koetter, Michael

    Heterogeneity in size and productivity is central to models that explain which manufacturing firms export. This study presents descriptive evidence on similar heterogeneity among international banks as financial services providers. A novel and detailed bank-level data set reveals the volume and mode

  12. Implementation of a new maximum power point tracking control strategy for small wind energy conversion systems without mechanical sensors

    International Nuclear Information System (INIS)

    Daili, Yacine; Gaubert, Jean-Paul; Rahmani, Lazhar

    2015-01-01

    Highlights: • A new maximum power point tracking algorithm for small wind turbines is proposed. • This algorithm resolves the problems of the classical perturb and observe method. • The proposed method has been tested under several wind speed profiles. • The validity of the new algorithm has been confirmed by the experimental results. - Abstract: This paper proposes a modified perturbation and observation maximum power point tracking algorithm for small wind energy conversion systems, to overcome the problems of the conventional perturbation and observation technique, namely the rapidity/efficiency trade-off and the divergence from peak power under fast variation of the wind speed. Two modes of operation are used by this algorithm: the normal perturbation and observation mode and the predictive mode. The normal perturbation and observation mode with small step-size is used under slow wind speed variation to track the true maximum power point with fewer fluctuations in steady state. When a rapid change of wind speed is detected, the algorithm tracks the new maximum power point in two phases: in the first stage, the algorithm switches to the predictive mode, in which the step-size is auto-adjusted according to the distance between the operating point and the estimated optimum point, to move the operating point close to the maximum power point rapidly; the normal perturbation and observation mode is then used to track the true peak power in the second stage. The dc-link voltage variation is used to detect rapid wind changes. The proposed algorithm requires neither knowledge of system parameters nor mechanical sensors. The experimental results confirm that the proposed algorithm has better performance in terms of dynamic response and efficiency compared with the conventional perturbation and observation algorithm
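
    A minimal sketch of the two-mode logic described above follows: small fixed perturb-and-observe steps in steady state, and a larger auto-scaled step when a fast wind change is detected through the dc-link voltage. The gains and thresholds are invented, and the real controller derives its predictive step from the distance to the estimated optimum point rather than from the voltage deviation alone.

    ```python
    def mppt_step(p_now, p_prev, step_prev, dv_dc, small_step=0.01,
                  dv_threshold=0.05, k_predictive=0.5):
        """Return the next perturbation of the operating point."""
        if abs(dv_dc) > dv_threshold:
            # predictive mode: step scales with the detected disturbance
            return k_predictive * dv_dc
        # normal P&O: keep the direction if power rose, reverse otherwise;
        # the sign of (dP * previous step) encodes exactly that rule
        direction = 1.0 if (p_now - p_prev) * step_prev >= 0 else -1.0
        return direction * small_step

    # One steady-state step: power rose after a positive step, so keep going
    print(mppt_step(p_now=101.0, p_prev=100.0, step_prev=0.01, dv_dc=0.0))
    ```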

  13. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test p log p maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  14. Microeconomic principles explain an optimal genome size in bacteria.

    Science.gov (United States)

    Ranea, Juan A G; Grant, Alastair; Thornton, Janet M; Orengo, Christine A

    2005-01-01

    Bacteria can clearly enhance their survival by expanding their genetic repertoire. However, the tight packing of the bacterial genome and the fact that the most evolved species do not necessarily have the biggest genomes suggest there are other evolutionary factors limiting their genome expansion. To clarify these restrictions on size, we studied those protein families contributing most significantly to bacterial-genome complexity. We found that all bacteria apply the same basic and ancestral 'molecular technology' to optimize their reproductive efficiency. The same microeconomics principles that define the optimum size in a factory can also explain the existence of a statistical optimum in bacterial genome size. This optimum is reached when the bacterial genome obtains the maximum metabolic complexity (revenue) for minimal regulatory genes (logistic cost).

  15. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  16. Pristipomoides filamentosus Size at Maturity Study

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains information used to help determine median size at 50% maturity for the bottomfish species, Pristipomoides filamentosus in the Main Hawaiian...

  17. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    Science.gov (United States)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
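
    A toy one-step-late (OSL) MAP sketch of this joint-penalty idea follows: two longitudinal images are reconstructed together, and the gradient of a quadratic coupling term enters the denominator of the usual ML-EM update. The system matrix, counts, and penalty weight are all invented, and real PET geometry is far larger.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.uniform(0, 1, (60, 20))     # toy system matrix (bins x voxels)
    truth1 = rng.uniform(1, 5, 20)
    truth2 = truth1 * 1.1               # small longitudinal change
    y1 = rng.poisson(A @ truth1)        # toy measured counts, scan 1
    y2 = rng.poisson(A @ truth2)        # toy measured counts, scan 2

    beta = 0.05                         # strength of the joint penalty
    lam1 = np.ones(20)
    lam2 = np.ones(20)
    sens = A.sum(axis=0)                # sensitivity image A^T 1

    for _ in range(100):
        # OSL MAP-EM: the penalty gradient (voxel-wise difference between
        # the longitudinal images) is evaluated at the current estimates
        lam1 *= (A.T @ (y1 / (A @ lam1))) / (sens + beta * (lam1 - lam2))
        lam2 *= (A.T @ (y2 / (A @ lam2))) / (sens + beta * (lam2 - lam1))

    print(np.abs(lam1 - truth1).mean(), np.abs(lam2 - truth2).mean())
    ```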

  18. Reference values of maximum walking speed among independent community-dwelling Danish adults aged 60 to 79 years

    DEFF Research Database (Denmark)

    Tibaek, S; Holmestad-Bechmann, N; Pedersen, Trine B

    2015-01-01

    OBJECTIVES: To establish reference values for maximum walking speed over 10m for independent community-dwelling Danish adults, aged 60 to 79 years, and to evaluate the effects of gender and age. DESIGN: Cross-sectional study. SETTING: Danish companies and senior citizens clubs. PARTICIPANTS: Two ...

  19. Modelling size-dependent cannibalism in barramundi Lates calcarifer: cannibalistic polyphenism and its implication to aquaculture.

    Directory of Open Access Journals (Sweden)

    Flavio F Ribeiro

    Full Text Available This study quantified size-dependent cannibalism in barramundi Lates calcarifer by coupling prey-predator pairs across a range of fish sizes. Predictive models were developed using morphological traits, with the alternative assumption of cannibalistic polyphenism. The predictive models were validated with data from trials in which cannibals were challenged with progressive increments of prey size. The experimental observations showed that cannibals of 25-131 mm total length could ingest conspecific prey of 78-72% of their own length. In the validation test, all predictive models underestimated the maximum ingestible prey size for cannibals of a similar size range. However, the model based on the maximal mouth width at opening closely matched the empirical observations, suggesting a certain degree of phenotypic plasticity of mouth size among cannibalistic individuals. Mouth size showed allometric growth compared with body depth, resulting in a decreasing trend in the maximum size of ingestible prey as cannibals grow larger, which in part explains why cannibalism in barramundi is frequently observed in the early developmental stage. Any barramundi has the potential to become a cannibal when the initial prey size is 58% of its size, suggesting that a 50% size difference can be the threshold to initiate intracohort cannibalism in a barramundi population. Cannibalistic polyphenism is likely to occur in barramundi that have a cannibalistic history. An experienced cannibal would have a greater ability to stretch its mouth to capture a much larger prey than the models predict. Awareness of cannibalistic polyphenism has important applications in fish farming management to reduce cannibalism.

  20. Prediction of LOCA Break Size Using CFNN

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Geon Pil; Yoo, Kwae Hwan; Back, Ju Hyun; Kim, Dong Yeong; Na, Man Gyun [Chosun University Gwangju (Korea, Republic of)

    2016-05-15

    Nuclear power plants (NPPs) have emergency core cooling systems (ECCS), such as the safety injection system. The ECCS may not function properly in the case of a small break size, because the pressure in the pipe changes only slightly. If coolant is not supplied by the ECCS, the reactor core will melt. Therefore, meltdown of the reactor core has to be prevented by appropriate accident management, based on prediction of the LOCA break size in advance. This study presents the prediction of LOCA break size using a cascaded fuzzy neural network (CFNN). The CFNN model repeatedly applies FNN modules that are serially connected. The CFNN model is a data-based method that requires data for its development and verification. The data were obtained by numerically simulating severe accident scenarios of the optimized power reactor (OPR1000) using the MAAP code, because real severe accident data cannot be obtained from actual NPP accidents. The CFNN model has been designed to rapidly predict the LOCA break size in LOCA situations. The CFNN model was trained using the training data set and checked using the test data set, both obtained with the MAAP code for the OPR1000 reactor. The performance results of the CFNN model show that the RMS error decreases as the stage number of the CFNN model increases. In addition, the RMS error level remains below 4%.

  1. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    Science.gov (United States)

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  2. Market size and attendance in English Premier League football

    OpenAIRE

    Buraimo, B; Simmons, R

    2006-01-01

    This paper models the impacts of market size and team competition for fan base on matchday attendance in the English Premier League over the period 1997-2004 using a large panel data set. We construct a comprehensive set of control variables and use tobit estimation to overcome the problems caused by sell-out crowds. We also account for unobserved influences on attendance by means of random effects attached to home teams. Our treatment of market size, with its use of Geographical Information ...

  3. Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Hare Krishna

    2017-01-01

    Full Text Available In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on the Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Also, Bayesian credible and highest posterior density (HPD) credible intervals are obtained for the parameters. Expected time on test and reliability characteristics are also analyzed in this article. To compare the various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration purposes, a randomly censored real data set is discussed.
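
    For the randomly censored setting described above, a numeric maximum likelihood sketch is easy to write down: observed failures contribute the geometric pmf and censored observations contribute the survival term. The simulation settings below are illustrative, not the article's real data set.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    t_true = rng.geometric(0.3, size=200)   # latent failure times, p = 0.3
    censor = rng.geometric(0.1, size=200)   # random censoring times
    t = np.minimum(t_true, censor)
    delta = (t_true <= censor).astype(int)  # 1 = failure observed, 0 = censored

    def neg_loglik(p):
        q = 1.0 - p
        # failures contribute p * q**(t-1); censored observations q**t
        return -np.sum(delta * (np.log(p) + (t - 1) * np.log(q))
                       + (1 - delta) * t * np.log(q))

    res = minimize_scalar(neg_loglik, bounds=(1e-6, 1 - 1e-6), method="bounded")
    print(res.x)  # should land near the true p = 0.3
    ```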

  4. The Ultraviolet Spectrometer and Polarimeter on the Solar Maximum Mission

    Science.gov (United States)

    Woodgate, B. E.; Brandt, J. C.; Kalet, M. W.; Kenny, P. J.; Tandberg-Hanssen, E. A.; Bruner, E. C.; Beckers, J. M.; Henze, W.; Knox, E. D.; Hyder, C. L.

    1980-01-01

    The Ultraviolet Spectrometer and Polarimeter (UVSP) on the Solar Maximum Mission spacecraft is described, including the experiment objectives, system design, performance, and modes of operation. The instrument operates in the wavelength range 1150-3600 Å with better than 2 arcsec spatial resolution, raster range 256 x 256 sq arcsec, and 20 mÅ spectral resolution in second order. Observations can be made with specific sets of four lines simultaneously, or with both sides of two lines simultaneously for velocity and polarization. A rotatable retarder can be inserted into the spectrometer beam for measurement of Zeeman splitting and linear polarization in the transition region and chromosphere.

  5. The ultraviolet spectrometer and polarimeter on the solar maximum mission

    International Nuclear Information System (INIS)

    Woodgate, B.E.; Brandt, J.C.; Kalet, M.W.; Kenny, P.J.; Beckers, J.M.; Henze, W.; Hyder, C.L.; Knox, E.D.

    1980-01-01

    The Ultraviolet Spectrometer and Polarimeter (UVSP) on the Solar Maximum Mission spacecraft is described, including the experiment objectives, system design, performance, and modes of operation. The instrument operates in the wavelength range 1150-3600 Å with better than 2 arcsec spatial resolution, raster range 256 x 256 sq arcsec, and 20 mÅ spectral resolution in second order. Observations can be made with specific sets of 4 lines simultaneously, or with both sides of 2 lines simultaneously for velocity and polarization. A rotatable retarder can be inserted into the spectrometer beam for measurement of Zeeman splitting and linear polarization in the transition region and chromosphere. (orig.)

  6. Full-sized plates irradiation with high UMo fuel loading. Final results of IRIS 1 experiment

    International Nuclear Information System (INIS)

    Huet, F.; Marelle, V.; Noirot, J.; Sacristan, P.; Lemoine, P.

    2003-01-01

    As a part of the French UMo Group qualification program, the IRIS 1 experiment contained full-sized plates with a high uranium loading in the meat of 8 g·cm⁻³. The fuel particles consisted of ground powders of uranium alloys with 7 and 9 wt% Mo. The plates were irradiated in the OSIRIS reactor in the IRIS device up to 67.5% peak burnup, with heat fluxes up to 136 W·cm⁻² and cladding temperatures up to 72 °C. After each reactor cycle the plate thicknesses were measured. The results show no difference in swelling behaviour versus burnup between the UMo7 and UMo9 plates. The maximum plate swelling at the peak burnup location remains below 6%. The wide set of PIE has shown that, within the studied irradiation conditions, the interaction product has a global formulation of (U-Mo)Al₇ and that there is no aluminium dissolution in the UMo particles. The IRIS 1 experiment, as the first step of UMo fuel qualification for research reactors, has established the good behaviour of UMo7 and UMo9 high-uranium-loading full-sized plates within the tested conditions. (author)

  7. Installation of the MAXIMUM microscope at the ALS

    International Nuclear Information System (INIS)

    Ng, W.; Perera, R.C.C.; Underwood, J.H.; Singh, S.; Solak, H.; Cerrina, F.

    1995-10-01

    The MAXIMUM scanning x-ray microscope, developed at the Synchrotron Radiation Center (SRC) at the University of Wisconsin, Madison, was implemented on the Advanced Light Source in August of 1995. The microscope's initial operation at SRC successfully demonstrated the use of a multilayer-coated Schwarzschild objective for focusing 130 eV x-rays to a spot size of better than 0.1 micron with an electron energy resolution of 250 meV. The performance of the microscope was severely limited because of the relatively low brightness of SRC, which limits the available flux at the focus of the microscope. The high brightness of the ALS is expected to increase the usable flux at the sample by a factor of 1,000. The authors will report on the installation of the microscope on bending magnet beamline 6.3.2 at the ALS and the initial measurement of optical performance on the new source, and preliminary experiments with the surface chemistry of HF-etched Si will be described

  8. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint whose distribution is of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. The Stiles-Crawford Effect: spot-size ratio departure in retinitis pigmentosa

    Science.gov (United States)

    Sharma, Nachieketa K.; Lakshminarayanan, Vasudevan

    2016-04-01

    The Stiles-Crawford effect of the first kind is the retina's compensatory response to the loss of luminance efficiency for oblique stimulation, manifested as the spot-size ratio departure from perfect power coupling for a normal human eye. In a retinitis pigmentosa (RP) eye, normal cone photoreceptor morphology is affected by foveal cone loss and a disrupted cone mosaic spatial arrangement, with a reduction in directional sensitivity. We show that the flattened Stiles-Crawford function (SCF) in an RP eye is due to a different spot-size ratio departure profile; that is, for the same loss of luminance efficiency, an RP eye has a smaller departure from perfect power coupling than a normal eye. Moreover, the difference in spot-size ratio departure increases from the centre towards the periphery, with zero value for axial entry and maximum value for maximally peripheral entry, indicating a dispersal of photoreceptor alignment. This dispersal prevents the retina from mounting a larger compensatory response, since it lacks both the number of photoreceptors and the appropriate cone morphology to counter the loss of luminance efficiency for oblique stimulation. The slope of the departure profile also testifies to the flattened SCF of an RP eye. Finally, the discrepancy in spot-size ratio departure between a normal and an RP eye is shown to have a direct bearing on the Stiles-Crawford diminution of visibility.

  10. Sustainable Sizing.

    Science.gov (United States)

    Robinette, Kathleen M; Veitch, Daisy

    2016-08-01

    To provide a review of sustainable sizing practices that reduce waste, increase sales, and simultaneously produce safer, better fitting, accommodating products. Sustainable sizing involves a set of methods good for both the environment (sustainable environment) and business (sustainable business). Sustainable sizing methods reduce (1) materials used, (2) the number of sizes or adjustments, and (3) the amount of product unsold or marked down for sale. This reduces waste and cost. The methods can also increase sales by fitting more people in the target market and produce happier, loyal customers with better fitting products. This is a mini-review of methods that result in more sustainable sizing practices. It also reviews and contrasts current statistical and modeling practices that lead to poor fit and sizing. Fit-mapping and the use of cases are two excellent methods suited for creating sustainable sizing, when real people (vs. virtual people) are used. These methods are described and reviewed. Evidence presented supports the view that virtual fitting with simulated people and products is not yet effective. Fit-mapping and cases with real people and actual products result in good design and products that are fit for person, fit for purpose, with good accommodation and comfortable, optimized sizing. While virtual models have been shown to be ineffective for predicting or representing fit, there is an opportunity to improve them by adding fit-mapping data to the models. This will require saving fit data, product data, anthropometry, and demographics in a standardized manner. For this success to extend to the wider design community, the development of a standardized method of data collection for fit-mapping with a globally shared fit-map database is needed. It will enable the world community to build knowledge of fit and accommodation and generate effective virtual fitting for the future. A standardized method of data collection that tests products' fit methodically

  11. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    Science.gov (United States)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, we must have a certain understanding of the geological lithological composition. Because of the restrictions of real conditions, only a limited amount of data can be acquired. To determine the lithological distribution in a study area, many spatial statistical methods are used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data; therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model, using the limited hard data from cores and the soft data generated from geological dating data and virtual wells to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  12. Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds

    International Nuclear Information System (INIS)

    Kaya, Savaş; Kaya, Cemal; Islam, Nazmul

    2016-01-01

    The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds and their electronegativities, chemical hardnesses and electrophilicities. Lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For four simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all chemical reactions, but the minimum polarizability and minimum electrophilicity principles are not valid for all reactions. We also propose simple methods to compute the percentage ionic characters and internuclear distances of ionic compounds. Comparative studies with experimental sets of data reveal that the proposed methods of computation of the percentage ionic characters and internuclear distances of ionic compounds are valid.

  13. Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Kaya, Savaş, E-mail: savaskaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Kaya, Cemal, E-mail: kaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Islam, Nazmul, E-mail: nazmul.islam786@gmail.com [Theoretical and Computational Chemistry Research Laboratory, Department of Basic Science and Humanities/Chemistry Techno Global-Balurghat, Balurghat, D. Dinajpur 733103 (India)

    2016-03-15

    The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds and their electronegativities, chemical hardnesses and electrophilicities. Lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For four simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all chemical reactions, but the minimum polarizability and minimum electrophilicity principles are not valid for all reactions. We also propose simple methods to compute the percentage ionic characters and internuclear distances of ionic compounds. Comparative studies with experimental sets of data reveal that the proposed methods of computation of the percentage ionic characters and internuclear distances of ionic compounds are valid.

  14. Portion size: a qualitative study of consumers' attitudes toward point-of-purchase interventions aimed at portion size.

    Science.gov (United States)

    Vermeer, Willemijn M; Steenhuis, Ingrid H M; Seidell, Jacob C

    2010-02-01

    This qualitative study assessed consumers' opinions of food portion sizes and their attitudes toward portion-size interventions located in various point-of-purchase settings targeting overweight and obese people. Eight semi-structured focus group discussions were conducted with 49 participants. Constructs from the diffusion of innovations theory were included in the interview guide. Each focus group was recorded and transcribed verbatim. Data were coded and analyzed with Atlas.ti 5.2 using the framework approach. Results showed that many participants thought that portion sizes of various products have increased during the past decades and are larger than acceptable. The majority also indicated that value for money is important when purchasing and that large portion sizes offer more value for money than small portion sizes. Furthermore, many experienced difficulties with self-regulating the consumption of large portion sizes. Among the portion-size interventions that were discussed, participants had the most positive attitudes toward a larger availability of portion sizes and pricing strategies, followed by serving-size labeling. In general, reducing package serving sizes as an intervention strategy to control food intake met resistance. The study concludes that consumers consider interventions consisting of a larger variety of available portion sizes, pricing strategies and serving-size labeling as the most acceptable to implement.

  15. Ductile fracture toughness of heavy section pressure vessel steel plate. A specimen-size study of ASTM A 533 steels

    International Nuclear Information System (INIS)

    Williams, J.A.

    1979-09-01

    The ductile fracture toughness, J_Ic, of ASTM A 533, Grade B, Class 1 and of ASTM A 533 heat treated to simulate irradiation was determined for 10- to 100-mm thick compact specimens. The toughness at maximum specimen load was also measured to determine the conservatism of J_Ic. The toughness of ASTM A 533, Grade B, Class 1 steel was 349 kJ/m² and, at the equivalent upper shelf temperature, the heat-treated material exhibited 87 kJ/m². The maximum load fracture toughness was found to be linearly proportional to specimen size, and only specimens which failed to meet ASTM size criteria exhibited maximum load toughness less than J_Ic

  16. LIBOR troubles: Anomalous movements detection based on maximum entropy

    Science.gov (United States)

    Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria

    2016-05-01

    According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR integrity, leading surveillance authorities to conduct investigations into banks' behavior. Such procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.

  17. Foraging Habitat Distributions Affect Territory Size and Shape in the Tuamotu Kingfisher

    Directory of Open Access Journals (Sweden)

    Dylan C. Kesler

    2012-01-01

    Full Text Available I studied factors influencing territory configuration in the Tuamotu kingfisher (Todiramphus gambieri). Radiotelemetry data were used to define territory boundaries, and I tested for effects of landscape habitat composition and foraging-patch configuration on territory size and shape. Tuamotu kingfisher territories were larger in areas with reduced densities of coconut-plantation foraging habitat, and territories were less circular in the study site that had a single slender patch of foraging habitat. Maximum territory length did not differ between study sites, however, which suggested that the size of Tuamotu kingfisher territories might be bounded by the combined influence of maximum travel distances and habitat configurations. Results also suggested that birds enlarge territories as they age. Together, the results supported previous work indicating that territory configurations represent a balance between the costs of defending a territory and the gains from territory ownership.

  18. Implications of late-in-life density-dependent growth for fishery size-at-entry leading to maximum sustainable yield

    DEFF Research Database (Denmark)

    van Gemert, Rob; Andersen, Ken Haste

    2018-01-01

    late-in-life density-dependent growth: North Sea plaice (Pleuronectes platessa), Northeast Atlantic (NEA) mackerel (Scomber scombrus), and Baltic sprat (Sprattus sprattus balticus). For all stocks, the model predicts exploitation at MSY with a large size-at-entry into the fishery, indicating that late-in-life density...

  19. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
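
    As a generic illustration of the technique (not MXLKID's actual code), the sketch below identifies the decay-rate parameter of a noisy first-order system by maximizing a Gaussian likelihood of the observed trajectory; all values are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(42)
    a_true, sigma, dt = 0.7, 0.05, 0.1
    t = np.arange(0.0, 5.0, dt)
    y = np.exp(-a_true * t) + rng.normal(0, sigma, t.size)  # noisy measurements

    def neg_log_lik(a):
        resid = y - np.exp(-a * t)          # model: x(t) = exp(-a t)
        return 0.5 * np.sum(resid**2) / sigma**2  # Gaussian NLL up to a constant

    fit = minimize_scalar(neg_log_lik, bounds=(0.01, 5.0), method="bounded")
    print(fit.x)  # close to a_true
    ```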

  20. General herpetological collecting is size-based for five Pacific lizards

    Science.gov (United States)

    Rodda, Gordon H.; Yackel Adams, Amy A.; Campbell, Earl W.; Fritts, Thomas H.

    2015-01-01

    Accurate estimation of a species’ size distribution is a key component of characterizing its ecology, evolution, physiology, and demography. We compared the body size distributions of five Pacific lizards (Carlia ailanpalai, Emoia caeruleocauda, Gehyra mutilata, Hemidactylus frenatus, and Lepidodactylus lugubris) from general herpetological collecting (including visual surveys and glue boards) with those from complete censuses obtained by total removal. All species exhibited the same pattern: general herpetological collecting undersampled juveniles and oversampled mid-sized adults. The bias was greatest for the smallest juveniles and was not statistically evident for newly maturing and very large adults. All of the true size distributions of these continuously breeding species were skewed heavily toward juveniles, more so than the detections obtained from general collecting. A strongly skewed size distribution is not well characterized by the mean or maximum, though those are the statistics routinely reported for species’ sizes. We found body mass to be distributed more symmetrically than was snout–vent length, providing an additional rationale for collecting and reporting that size measure.

  1. Undersampling power-law size distributions: effect on the assessment of extreme natural hazards

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2014-01-01

    The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one to several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits on the hazard source size and attenuation mechanisms from source to site constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historical data.
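
    A minimal version of the joint two-parameter estimation can be sketched as follows, assuming the tapered Pareto survival function S(x) = (x_t/x)^beta * exp((x_t - x)/x_c) for x >= x_t. The catalog below is synthetic: because S is the product of a Pareto and an exponential survival function, an exact draw is the minimum of a Pareto variate and an exponentially tapered variate.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(params, x, x_t):
        beta, x_c = params
        if beta <= 0 or x_c <= 0:
            return np.inf
        # pdf(x) = S(x) * (beta/x + 1/x_c), with S as above
        log_S = beta * (np.log(x_t) - np.log(x)) + (x_t - x) / x_c
        return -np.sum(log_S + np.log(beta / x + 1.0 / x_c))

    rng = np.random.default_rng(1)
    x_t, beta_true, xc_true, n = 1.0, 1.0, 50.0, 500
    p = x_t * rng.random(n) ** (-1.0 / beta_true)   # pure Pareto tail
    e = x_t + rng.exponential(xc_true, n)           # exponential taper
    x = np.minimum(p, e)    # min of the two has survival S_pareto * S_exp

    fit = minimize(neg_log_lik, x0=[0.8, 20.0], args=(x, x_t),
                   method="Nelder-Mead")
    print(fit.x)  # beta is recovered well; x_c is typically poorly constrained
    ```

    Re-running with shorter catalogs reproduces the instability of the corner-size estimate noted above.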

  2. CytoMCS: A Multiple Maximum Common Subgraph Detection Tool for Cytoscape

    DEFF Research Database (Denmark)

    Larsen, Simon; Baumbach, Jan

    2017-01-01

    To facilitate such analyses, we have developed CytoMCS, a Cytoscape app for computing inexact solutions to the maximum common edge subgraph problem for two or more graphs. Our algorithm uses an iterative local search heuristic for computing conserved subgraphs, optimizing a squared edge conservation score that is able to detect not only fully conserved edges but also partially conserved edges. It can be applied to any set of directed or undirected, simple graphs loaded as networks into Cytoscape, e.g. protein-protein interaction networks or gene regulatory networks. CytoMCS is available as a Cytoscape app at http://apps.cytoscape.org/apps/cytomcs.

  3. Hypofractionated Whole-Breast Radiation Therapy: Does Breast Size Matter?

    International Nuclear Information System (INIS)

    Hannan, Raquibul; Thompson, Reid F.; Chen Yu; Bernstein, Karen; Kabarriti, Rafi; Skinner, William; Chen, Chin C.; Landau, Evan; Miller, Ekeni; Spierer, Marnee; Hong, Linda; Kalnicki, Shalom

    2012-01-01

    Purpose: To evaluate the effects of breast size on dose-volume histogram parameters and clinical toxicity in whole-breast hypofractionated radiation therapy using intensity modulated radiation therapy (IMRT). Materials and Methods: In this retrospective study, all patients undergoing breast-conserving therapy between 2005 and 2009 were screened, and qualifying consecutive patients were included in 1 of 2 cohorts: large-breasted patients (chest wall separation >25 cm or planning target volume [PTV] >1500 cm³) (n=97) and small-breasted patients (chest wall separation ≤25 cm and PTV ≤1500 cm³) (n=32). All patients were treated prone or supine with hypofractionated IMRT to the whole breast (42.4 Gy in 16 fractions) followed by a boost dose (9.6 Gy in 4 fractions). Dosimetric and clinical toxicity data were collected and analyzed using the R statistical package (version 2.12). Results: The mean PTV V95 (percentage of volume receiving ≥95% of the prescribed dose) was 90.18%, and the mean PTV V105 (percentage of volume receiving ≥105% of the prescribed dose) was 3.55%, with no dose greater than 107%. PTV dose was independent of breast size, whereas heart dose and maximum point dose to skin correlated with increasing breast size. Lung dose was markedly decreased in prone compared with supine treatments. Radiation Therapy Oncology Group grade 0, 1, and 2 skin toxicities were noted acutely in 6%, 69%, and 25% of patients, respectively, and at later follow-up (>3 months) in 43%, 57%, and 0% of patients, respectively. Large breast size contributed to increased acute grade 2 toxicity (28% vs 12%, P=.008). Conclusions: Adequate PTV coverage with acceptable hot spots and excellent sparing of organs at risk was achieved by use of IMRT regardless of treatment position and breast size. Although increasing breast size leads to increased heart dose and maximum skin dose, heart dose remained within our institutional constraints and the incidence of overall skin toxicity was comparable to that reported in the

  4. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    International Nuclear Information System (INIS)

    Ning, A; Dykes, K

    2014-01-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent

  5. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    Science.gov (United States)

    Ning, A.; Dykes, K.

    2014-06-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.

  6. Calculations of safe collimator settings and β* at the CERN Large Hadron Collider

    Directory of Open Access Journals (Sweden)

    R. Bruce

    2015-06-01

    Full Text Available The first run of the Large Hadron Collider (LHC) at CERN was very successful and resulted in important physics discoveries. One way of increasing the luminosity in a collider, which gave a very significant contribution to the LHC performance in the first run and can be used even if the beam intensity cannot be increased, is to decrease the transverse beam size at the interaction points by reducing the optical function β*. However, when doing so, the beam becomes larger in the final focusing system, which could expose its aperture to beam losses. For the LHC, which is designed to store beams with a total energy of 362 MJ, this is critical, since the loss of even a small fraction of the beam could cause a magnet quench or even damage. Therefore, the machine aperture has to be protected by the collimation system. The settings of the collimators constrain the maximum beam size that can be tolerated and therefore impose a lower limit on β*. In this paper, we present calculations to determine safe collimator settings and the resulting limit on β*, based on available aperture and operational stability of the machine. Our model was used to determine the LHC configurations in 2011 and 2012 and it was found that β* could be decreased significantly compared to the conservative model used in 2010. The gain in luminosity resulting from the decreased margins between collimators was more than a factor 2, and a further contribution from the use of realistic aperture estimates based on measurements was almost as large. This has played an essential role in the rapid and successful accumulation of experimental data in the LHC.

  7. Calculations of safe collimator settings and β* at the CERN Large Hadron Collider

    Science.gov (United States)

    Bruce, R.; Assmann, R. W.; Redaelli, S.

    2015-06-01

    The first run of the Large Hadron Collider (LHC) at CERN was very successful and resulted in important physics discoveries. One way of increasing the luminosity in a collider, which gave a very significant contribution to the LHC performance in the first run and can be used even if the beam intensity cannot be increased, is to decrease the transverse beam size at the interaction points by reducing the optical function β*. However, when doing so, the beam becomes larger in the final focusing system, which could expose its aperture to beam losses. For the LHC, which is designed to store beams with a total energy of 362 MJ, this is critical, since the loss of even a small fraction of the beam could cause a magnet quench or even damage. Therefore, the machine aperture has to be protected by the collimation system. The settings of the collimators constrain the maximum beam size that can be tolerated and therefore impose a lower limit on β*. In this paper, we present calculations to determine safe collimator settings and the resulting limit on β*, based on available aperture and operational stability of the machine. Our model was used to determine the LHC configurations in 2011 and 2012 and it was found that β* could be decreased significantly compared to the conservative model used in 2010. The gain in luminosity resulting from the decreased margins between collimators was more than a factor 2, and a further contribution from the use of realistic aperture estimates based on measurements was almost as large. This has played an essential role in the rapid and successful accumulation of experimental data in the LHC.
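
    The core of the argument can be reduced to a back-of-envelope calculation: the peak beta function in the final-focus triplet scales roughly as L*²/β*, so demanding that the triplet aperture stay a given number of beam sigmas away from the beam sets a minimum β*. The sketch below illustrates that scaling only; all numbers are invented placeholders, not actual LHC settings or the paper's model.

    ```python
    import numpy as np

    eps_n = 3.5e-6      # normalized emittance [m rad] (illustrative)
    gamma = 4263.0      # Lorentz factor (illustrative)
    L_star = 23.0       # distance from IP to first quadrupole [m] (illustrative)
    aperture = 10e-3    # triplet half-aperture [m] (illustrative)
    n_protect = 12.0    # required aperture margin in beam sigmas (illustrative)

    eps = eps_n / gamma                         # geometric emittance
    beta_star = np.linspace(0.2, 2.0, 200)      # candidate beta* values [m]
    beta_max = L_star**2 / beta_star            # peak beta in the triplet
    sigma_max = np.sqrt(eps * beta_max)         # beam size at that point
    ok = aperture / sigma_max >= n_protect      # aperture margin satisfied?
    print(beta_star[ok][0])                     # smallest allowed beta*
    ```

    Smaller collimator margins (a smaller n_protect) directly lower the allowed β*, which is the mechanism behind the luminosity gain described in the abstract.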

  8. Population size and yield of Baffin Bay beluga (Delphinapterus leucas) stocks

    Directory of Open Access Journals (Sweden)

    Stuart Innes

    2002-07-01

    Full Text Available A surplus production model within a Sampling/Importance Resampling (SIR) Bayesian analysis was used to estimate stock sizes and yields of Baffin Bay belugas. The catch of belugas in West Greenland increased in 1968 and has remained well above sustainable rates. The SIR analysis indicated a decline of about 50% between 1981 and 1994, with a credibility interval that included a previous estimate of 62%. The estimated stock sizes of belugas wintering off West Greenland in 1998 and 1999 were approximately 5,100 and 4,100, respectively, and were not significantly different from an estimate based on aerial surveys combined for both years. Projected to 1999, this stock can sustain median landings of 109 whales with a total kill of about 155, based on posterior estimates of struck-and-lost rates plus under-reporting. The declining stock-size index series did not provide sufficient information to estimate the potential maximum rate of population growth, the number of whales struck and lost, or the shape of the production curve with precision. Estimating these parameters requires an index time series with a marked step change in catch or a series with increasing stock sizes. The stock size estimate for the belugas wintering in the North Water in 1999 was approximately 14,800, but there is no information about the population biology of these whales. The estimated maximum sustainable yield (landed) for the North Water stock was 317 belugas.
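
    The SIR scheme itself is compact: draw parameters from the priors, weight each draw by the likelihood of the index series under a logistic surplus production model, and resample in proportion to the weights. The sketch below uses invented catches, index values, and priors purely for illustration; it is not the paper's parameterization.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    catch = np.array([300.0, 400.0, 500.0, 550.0, 600.0])       # hypothetical landings
    index = np.array([8000.0, 7200.0, 6300.0, 5600.0, 4900.0])  # hypothetical survey index

    def predict_index(K, r, q):
        N, pred = K, []          # assume the stock starts at carrying capacity
        for c in catch:
            pred.append(q * N)
            N = max(N + r * N * (1.0 - N / K) - c, 1.0)  # logistic dynamics
        return np.array(pred)

    n_draws, sigma = 20_000, 0.2
    K = rng.uniform(5_000, 40_000, n_draws)   # illustrative priors
    r = rng.uniform(0.01, 0.10, n_draws)
    q = rng.uniform(0.5, 1.5, n_draws)

    # importance weights from a lognormal observation model
    w = np.empty(n_draws)
    for i in range(n_draws):
        resid = np.log(index) - np.log(predict_index(K[i], r[i], q[i]))
        w[i] = np.exp(-np.sum(resid**2) / (2.0 * sigma**2))

    draws = rng.choice(n_draws, size=5_000, p=w / w.sum())   # resampling step
    K_post, r_post = K[draws], r[draws]
    print(np.median(r_post * K_post / 4.0))  # posterior median MSY (logistic model)
    ```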

  9. Uniform deposition of size-selected clusters using Lissajous scanning

    International Nuclear Information System (INIS)

    Beniya, Atsushi; Watanabe, Yoshihide; Hirata, Hirohito

    2016-01-01

    Size-selected clusters can be deposited on a surface using size-selected cluster ion beams. However, because of the cross-sectional intensity distribution of the ion beam, it is difficult to define the coverage of the deposited clusters. The aggregation probability of the clusters depends on coverage, so the cluster size on the surface depends on position even though size-selected clusters are deposited. It is crucial, therefore, to deposit clusters uniformly on the surface. In this study, size-selected clusters were deposited uniformly on surfaces by scanning the cluster ions in the form of a Lissajous pattern. Two sets of deflector electrodes set in orthogonal directions were placed in front of the sample surface. Triangular waves were applied to the electrodes with an irrational frequency ratio to ensure that the ion trajectory filled the sample surface. The advantages of this method are the simplicity and low cost of the setup compared with the raster-scanning method. The authors further investigated CO adsorption on size-selected Pt_n (n = 7, 15, 20) clusters uniformly deposited on the Al2O3/NiAl(110) surface and demonstrated the importance of uniform deposition.

  10. Uniform deposition of size-selected clusters using Lissajous scanning

    Energy Technology Data Exchange (ETDEWEB)

    Beniya, Atsushi; Watanabe, Yoshihide, E-mail: e0827@mosk.tytlabs.co.jp [Toyota Central R&D Labs., Inc., 41-1 Yokomichi, Nagakute, Aichi 480-1192 (Japan); Hirata, Hirohito [Toyota Motor Corporation, 1200 Mishuku, Susono, Shizuoka 410-1193 (Japan)

    2016-05-15

    Size-selected clusters can be deposited on a surface using size-selected cluster ion beams. However, because of the cross-sectional intensity distribution of the ion beam, it is difficult to define the coverage of the deposited clusters. The aggregation probability of the clusters depends on coverage, so the cluster size on the surface depends on position even though size-selected clusters are deposited. It is crucial, therefore, to deposit clusters uniformly on the surface. In this study, size-selected clusters were deposited uniformly on surfaces by scanning the cluster ions in the form of a Lissajous pattern. Two sets of deflector electrodes set in orthogonal directions were placed in front of the sample surface. Triangular waves were applied to the electrodes with an irrational frequency ratio to ensure that the ion trajectory filled the sample surface. The advantages of this method are the simplicity and low cost of the setup compared with the raster-scanning method. The authors further investigated CO adsorption on size-selected Pt_n (n = 7, 15, 20) clusters uniformly deposited on the Al2O3/NiAl(110) surface and demonstrated the importance of uniform deposition.
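
    The scanning scheme lends itself to a short numerical check. The sketch below generates the two triangular deflection waveforms with an irrational frequency ratio and verifies that the dwell positions fill the scanned square nearly uniformly; amplitudes, frequencies, and scan time are arbitrary illustrative values.

    ```python
    import numpy as np

    def triangle(t, freq):
        """Unit-amplitude triangular wave of the given frequency."""
        phase = (t * freq) % 1.0
        return 4.0 * np.abs(phase - 0.5) - 1.0

    t = np.linspace(0.0, 200.0, 2_000_000)   # scan time samples
    fx, fy = 1.0, np.sqrt(2.0)               # irrational ratio -> no closed orbit
    x, y = triangle(t, fx), triangle(t, fy)  # beam position on the sample

    # Check fill uniformity by histogramming dwell positions on a grid
    H, _, _ = np.histogram2d(x, y, bins=50, range=[[-1, 1], [-1, 1]])
    print(H.std() / H.mean())  # small relative spread -> nearly uniform coverage
    ```

    With a rational frequency ratio the trajectory closes on itself and leaves unvisited stripes, which is why the irrational ratio matters.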

  11. Effects of patch size on feeding associations in muriquis (Brachyteles arachnoides).

    Science.gov (United States)

    Strier, K B

    1989-01-01

    Data were collected on one group of muriquis, or woolly spider monkeys (Brachyteles arachnoides) during a 14-month study at Fazenda Montes Claros, Minas Gerais, Brazil to examine the effects of food patch size on muriqui feeding associations. Muriqui food patches were larger than expected from the availability of patch sizes in the forest; fruit patches were significantly larger than leaf patches. Feeding aggregate size, the maximum number of simultaneous occupants, and patch occupancy time were positively related to the size of fruit patches. However, a greater number of individuals fed at leaf sources than expected from the size of these patches. Adult females tended to feed alone in patches more often than males, whereas males tended to feed in single-sexed groups more often than females. Yet in neither case were these differences statistically significant.

  12. Simulation of the measure of the microparticle size distribution in two dimensions

    International Nuclear Information System (INIS)

    Lameiras, F.S.; Pinheiro, P.

    1987-01-01

    Different size distributions of plane figures were generated in a computer as a simply connected network. These size distributions were measured by the Saltykov method for two dimensions. The comparison between the generated and measured distributions showed that the Saltykov method tends to measure a larger scatter than the real one and to shift the maximum of the real distribution to larger diameters. These errors were determined by means of the ratio between the perimeter of the figures per unit area measured directly and the perimeter calculated from the size distribution obtained with the Saltykov method. (Author)

  13. Energetic tradeoffs control the size distribution of aquatic mammals

    Science.gov (United States)

    Gearty, William; McClain, Craig R.; Payne, Jonathan L.

    2018-04-01

    Four extant lineages of mammals have invaded and diversified in the water: Sirenia, Cetacea, Pinnipedia, and Lutrinae. Most of these aquatic clades are larger bodied, on average, than their closest land-dwelling relatives, but the extent to which potential ecological, biomechanical, and physiological controls contributed to this pattern remains untested quantitatively. Here, we use previously published data on the body masses of 3,859 living and 2,999 fossil mammal species to examine the evolutionary trajectories of body size in aquatic mammals through both comparative phylogenetic analysis and examination of the fossil record. Both methods indicate that the evolution of an aquatic lifestyle is driving three of the four extant aquatic mammal clades toward a size attractor at ~500 kg. The existence of this body size attractor and the relatively rapid selection toward, and limited deviation from, this attractor rule out most hypothesized drivers of size increase. These three independent body size increases and a shared aquatic optimum size are consistent with control by differences in the scaling of energetic intake and cost functions with body size between the terrestrial and aquatic realms. Under this energetic model, thermoregulatory costs constrain minimum size, whereas limitations on feeding efficiency constrain maximum size. The optimum size occurs at an intermediate value where thermoregulatory costs are low but feeding efficiency remains high. Rather than being released from size pressures, water-dwelling mammals are driven and confined to larger body sizes by the strict energetic demands of the aquatic medium.

  14. Feature-size dependent selective edge enhancement of x-ray images

    International Nuclear Information System (INIS)

    Herman, S.

    1988-01-01

    Morphological filters are nonlinear signal transformations that operate on a picture directly in the space domain. Such filters are based on the theory of mathematical morphology formulated previously. The filter presented here features a "mask" operator (called a "structuring element" in some of the literature) which is a function of the two spatial coordinates x and y. The two basic mathematical operations are called "masked erosion" and "masked dilation". In the case of masked erosion, the mask is passed over the input image in a raster pattern. At each position of the mask, the pixel values under the mask are multiplied by the mask pixel values. Then the output pixel value, located at the center position of the mask, is set equal to the minimum of the products of the mask and input values. Similarly, for masked dilation, the output pixel value is the maximum of the products of the input and the mask pixel values. The two basic processes of dilation and erosion can be used to construct the next level of operations: the "positive sieve" (also called "opening") and the "negative sieve" ("closing"). The positive sieve modifies the peaks in the image, whereas the negative sieve works on image valleys. The positive sieve is implemented by passing the output of the masked erosion step through the masked dilation function. The negative sieve reverses this procedure, using a dilation followed by an erosion. Each such sifting operator is characterized by a "hole size". It will be shown that the choice of hole size selects the range of pixel detail sizes which are to be enhanced. The shape of the mask governs the shape of the enhancement. Finally, positive sifting is used to enhance positive-going (peak) features, whereas negative sifting enhances negative-going (valley) landmarks
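
    A minimal numerical sketch of these operators, following the multiplicative min/max definitions above and assuming a small nonnegative weight mask W, might look as follows. The peak-enhancement line at the end illustrates one way feature-size selective enhancement could be composed from the sieve; it is an assumption for illustration, not taken verbatim from the source.

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def masked_erosion(img, W):
        k = W.shape[0]                                  # odd mask size assumed
        padded = np.pad(img, k // 2, mode="edge")
        windows = sliding_window_view(padded, (k, k))   # shape (H, W, k, k)
        return (windows * W).min(axis=(-2, -1))         # min of products

    def masked_dilation(img, W):
        k = W.shape[0]
        padded = np.pad(img, k // 2, mode="edge")
        windows = sliding_window_view(padded, (k, k))
        return (windows * W).max(axis=(-2, -1))         # max of products

    def positive_sieve(img, W):   # "opening": erosion then dilation
        return masked_dilation(masked_erosion(img, W), W)

    def negative_sieve(img, W):   # "closing": dilation then erosion
        return masked_erosion(masked_dilation(img, W), W)

    # Peak enhancement at a chosen feature scale (the "hole size" tracks the
    # mask extent): subtracting the sieved image keeps details narrower
    # than the mask.
    img = np.random.rand(64, 64)
    W = np.ones((5, 5))
    enhanced = img - positive_sieve(img, W)
    ```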

  15. Maximum production rate optimization for sulphuric acid decomposition process in tubular plug-flow reactor

    International Nuclear Information System (INIS)

    Wang, Chao; Chen, Lingen; Xia, Shaojun; Sun, Fengrui

    2016-01-01

    A sulphuric acid decomposition process in a tubular plug-flow reactor with a fixed inlet flow rate and completely controllable exterior wall temperature profile and reactant pressure profile is studied in this paper by using finite-time thermodynamics. The maximum production rate of the aimed product SO2 and the optimal exterior wall temperature profile and reactant pressure profile are obtained by using a nonlinear programming method. The optimal reactor with the maximum production rate is then compared with a reference reactor with a linear exterior wall temperature profile and with the optimal reactor with minimum entropy generation rate. The results show that the production rate of SO2 in the optimal reactor increases by more than 7%. The optimization of the temperature profile has little influence on the production rate, while the optimization of the reactant pressure profile can significantly increase the production rate. The results obtained may provide some guidelines for the design of real tubular reactors. - Highlights: • A sulphuric acid decomposition process in a tubular plug-flow reactor is studied. • A fixed inlet flow rate and controllable temperature and pressure profiles are set. • The maximum production rate of the aimed product SO2 is obtained. • The corresponding optimal temperature and pressure profiles are derived. • The production rate of SO2 in the optimal reactor increases by 7%.

  16. Allometries of Maximum Growth Rate versus Body Mass at Maximum Growth Indicate That Non-Avian Dinosaurs Had Growth Rates Typical of Fast Growing Ectothermic Sauropsids

    Science.gov (United States)

    Werner, Jan; Griebeler, Eva Maria

    2014-01-01

    We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case’s study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either

  17. Allometries of maximum growth rate versus body mass at maximum growth indicate that non-avian dinosaurs had growth rates typical of fast growing ectothermic sauropsids.

    Science.gov (United States)

    Werner, Jan; Griebeler, Eva Maria

    2014-01-01

    We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either of

  18. Allometries of maximum growth rate versus body mass at maximum growth indicate that non-avian dinosaurs had growth rates typical of fast growing ectothermic sauropsids.

    Directory of Open Access Journals (Sweden)

    Jan Werner

    Full Text Available We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either
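
    The fixed-slope comparison used in these three records is easy to reproduce in outline: with the slope pinned at 0.75, each group's intercept is just the mean of log10(Gmax) - 0.75·log10(M), and intercept differences translate into growth-rate ratios at equal body mass. The data points below are placeholders, not the study's values.

    ```python
    import numpy as np

    def group_intercept(mass, gmax, slope=0.75):
        """Intercept of log10(Gmax) = slope*log10(M) + b, with slope fixed."""
        return np.mean(np.log10(gmax) - slope * np.log10(mass))

    # hypothetical (body mass [g], max growth rate [g/day]) samples per group
    reptiles = (np.array([1e2, 1e3, 1e4]), np.array([0.05, 0.3, 1.8]))
    mammals  = (np.array([1e2, 1e3, 1e4]), np.array([0.5, 3.0, 17.0]))

    b_rep = group_intercept(*reptiles)
    b_mam = group_intercept(*mammals)
    # ratio of growth rates at equal mass = 10 ** (intercept difference)
    print(10 ** (b_mam - b_rep))  # ~10x, in line with the gap reported above
    ```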

  19. Radiation inactivation analysis of enzymes. Effect of free radical scavengers on apparent target sizes

    International Nuclear Information System (INIS)

    Eichler, D.C.; Solomonson, L.P.; Barber, M.J.; McCreery, M.J.; Ness, G.C.

    1987-01-01

    In most cases the apparent target size obtained by radiation inactivation analysis corresponds to the subunit size or to the size of a multimeric complex. In this report, we examined whether the larger than expected target sizes of some enzymes could be due to secondary effects of free radicals. To test this proposal we carried out radiation inactivation analysis on Escherichia coli DNA polymerase I, Torula yeast glucose-6-phosphate dehydrogenase, Chlorella vulgaris nitrate reductase, and chicken liver sulfite oxidase in the presence and absence of free radical scavengers (benzoic acid and mannitol). In the presence of free radical scavengers, inactivation curves are shifted toward higher radiation doses. Plots of scavenger concentration versus enzyme activity showed that the protective effect of benzoic acid reached a maximum at 25 mM then declined. Mannitol alone had little effect, but appeared to broaden the maximum protective range of benzoic acid relative to concentration. The apparent target size of the polymerase activity of DNA polymerase I in the presence of free radical scavengers was about 40% of that observed in the absence of these agents. This is considerably less than the minimum polypeptide size and may reflect the actual size of the polymerase functional domain. Similar effects, but of lesser magnitude, were observed for glucose-6-phosphate dehydrogenase, nitrate reductase, and sulfite oxidase. These results suggest that secondary damage due to free radicals generated in the local environment as a result of ionizing radiation can influence the apparent target size obtained by this method

  20. 78 FR 9845 - Minimum and Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for a Violation of...

    Science.gov (United States)

    2013-02-12

    ... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...

  1. Growth dynamics of the threatened Caribbean staghorn coral Acropora cervicornis: influence of host genotype, symbiont identity, colony size, and environmental setting.

    Science.gov (United States)

    Lirman, Diego; Schopmeyer, Stephanie; Galvan, Victor; Drury, Crawford; Baker, Andrew C; Baums, Iliana B

    2014-01-01

    The drastic decline in the abundance of Caribbean acroporid corals (Acropora cervicornis, A. palmata) has prompted the listing of this genus as threatened as well as the development of a regional propagation and restoration program. Using in situ underwater nurseries, we documented the influence of coral genotype and symbiont identity, colony size, and propagation method on the growth and branching patterns of staghorn corals in Florida and the Dominican Republic. Individual tracking of >1700 nursery-grown staghorn fragments and colonies from 37 distinct genotypes (identified using microsatellites) in Florida and the Dominican Republic revealed a significant positive relationship between size and growth, but a decreasing rate of productivity with increasing size. Pruning vigor (enhanced growth after fragmentation) was documented even in colonies that lost 95% of their coral tissue/skeleton, indicating that high productivity can be maintained within nurseries by sequentially fragmenting corals. A significant effect of coral genotype was documented for corals grown in a common-garden setting, with fast-growing genotypes growing up to an order of magnitude faster than slow-growing genotypes. Algal-symbiont identity established using qPCR techniques showed that clade A (likely Symbiodinium A3) was the dominant symbiont type for all coral genotypes, except for one coral genotype in the DR and two in Florida that were dominated by clade C, with A- and C-dominated genotypes having similar growth rates. The threatened Caribbean staghorn coral is capable of extremely fast growth, with annual productivity rates exceeding 5 cm of new coral produced for every cm of existing coral. This species benefits from high fragment survivorship coupled with the pruning vigor experienced by the parent colonies after fragmentation. These life-history characteristics make A. cervicornis a successful candidate nursery species and provide optimism for the potential role that active propagation

  2. Surgical Instrument Sets for Special Operations Expeditionary Surgical Teams.

    Science.gov (United States)

    Hale, Diane F; Sexton, Justin C; Benavides, Linda C; Benavides, Jerry M; Lundy, Jonathan B

    The deployment of surgical assets has been driven by mission demands throughout years of military operations in Iraq and Afghanistan. The transition to the highly expeditious Golden Hour Offset Surgical Transport Team (GHOST-T) now offers highly mobile surgical assets in nontraditional operating rooms; the content of the surgical instrument sets has also transformed to accommodate this change. The 102nd Forward Surgical Team (FST) was attached to Special Operations assigned to southern Afghanistan from June 2015 to March 2016. The focus was to decrease the overall size and weight of FST instrument sets without decreasing the surgical capability of the GHOST-T. Each instrument set was evaluated and modified to include essential instruments to perform damage control surgery. The overall number of main instrument sets was decreased from eight to four; simplified augmentation sets have been added, which expand the capabilities of any main set. The overall size was decreased by 40% and overall weight decreased by 58%. The cardiothoracic, thoracotomy, and emergency thoracotomy trays were condensed into a thoracic set. The orthopedic and amputation sets were replaced with an augmentation set consisting of a prepackaged orthopedic external fixator set. An augmentation set to the major or minor basic sets, specifically for vascular injuries, was created. Through the reorganization of conventional FST surgical instrument sets to maintain damage control capabilities and mobility, the 102nd GHOST-T reduced surgical equipment volume and weight, providing a lesson learned for future surgical teams operating in austere environments.

  3. Simulations of Scatterometry Down to 22 nm Structure Sizes and Beyond with Special Emphasis on LER

    Science.gov (United States)

    Osten, W.; Ferreras Paz, V.; Frenner, K.; Schuster, T.; Bloess, H.

    2009-09-01

    In recent years, scatterometry has become one of the most commonly used methods for CD metrology. With decreasing structure size for future technology nodes, the search for optimized scatterometry measurement configurations becomes more important in order to exploit maximum sensitivity. As widespread industrial scatterometry tools mainly still use a pre-set measurement configuration, there are still free parameters available to improve sensitivity. Our current work uses a simulation-based approach to predict and optimize the sensitivity for future technology nodes. Since line edge roughness is becoming important for such small structures, these imperfections of the periodic continuation cannot be neglected. Using Fourier methods such as the rigorous coupled wave approach (RCWA) for the diffraction calculation, nonperiodic features are hard to treat. We show that in this field certain types of field-stitching methods show good numerical behaviour and lead to useful results.

  4. Influence of region of interest size and ultrasound lesion size on the performance of 2D shear wave elastography (SWE) in solid breast masses.

    Science.gov (United States)

    Skerl, K; Vinnicombe, S; Giannotti, E; Thomson, K; Evans, A

    2015-12-01

    To evaluate the influence of the region of interest (ROI) size and lesion diameter on the diagnostic performance of 2D shear wave elastography (SWE) of solid breast lesions. A study group of 206 consecutive patients (age range 21-92 years) with 210 solid breast lesions (70 benign, 140 malignant) who underwent core biopsy or surgical excision was evaluated. Lesions were divided into small (diameter <15 mm, n=112) and large lesions (diameter ≥15 mm, n=98). An ROI with a diameter of 1, 2, and 3 mm was positioned over the stiffest part of the lesion. The maximum elasticity (Emax), mean elasticity (Emean) and standard deviation (SD) for each ROI size were compared to the pathological outcome. Statistical analysis was undertaken using the chi-square test and receiver operating characteristic (ROC) analysis. The ROI size used has a significant impact on the performance of Emean and SD but not on Emax. Youden's indices show a correlation with ROI size and lesion size: generally, the benign/malignant threshold is lower with increasing ROI size but higher with increasing lesion size. No single SWE parameter has superior performance. Lesion size and ROI size influence diagnostic performance.
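
    The Youden's indices referred to above come from a standard ROC construction: for each candidate cutoff, J = sensitivity + specificity - 1, and the cutoff maximizing J is taken as the benign/malignant threshold. A minimal sketch of that computation (function and variable names are ours, not the study's):

```python
import numpy as np

def youden_threshold(values, labels):
    """Cutoff on an SWE parameter (e.g., Emean in kPa) maximizing Youden's J.

    values: one reading per lesion; labels: 0 = benign, 1 = malignant.
    """
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    best_t, best_j = None, -1.0
    for t in np.unique(values):
        pred = values >= t                    # stiffer lesions called malignant
        sens = pred[labels].mean()            # true positive rate
        spec = (~pred[~labels]).mean()        # true negative rate
        j = sens + spec - 1.0                 # Youden's J statistic
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

scores = np.array([30, 45, 60, 80, 95, 120, 150, 160])  # hypothetical kPa values
truth = np.array([0, 0, 0, 1, 0, 1, 1, 1])
print(youden_threshold(scores, truth))                   # (80.0, 0.75)
```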

  5. On nuclei and blocking sets in Desarguesian spaces

    NARCIS (Netherlands)

    Ball, S.M.

    1999-01-01

    A generalisation is given to recent results concerning the possible number of nuclei to a set of points in PG(n, q). As an application of this we obtain new lower bounds on the size of a t-fold blocking set of AG(n, q) in the case (t, q) > 1.

  6. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
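
    The bottom-up pass of Fitch's algorithm conveys how MP infers ancestral states on a fully bifurcating tree: an internal node takes the intersection of its children's state sets when it is non-empty, otherwise their union, with each union counting one change. The sketch below is a generic textbook illustration of that idea, not the paper's machinery; the tree encoding is our assumption.

```python
def fitch(node):
    """Return (state_set, change_count) for the subtree rooted at node.

    A leaf is a set like {"a"}; an internal node is a pair
    (left_subtree, right_subtree) of a fully bifurcating tree.
    """
    if isinstance(node, set):                  # leaf: observed character state
        return node, 0
    (ls, lc), (rs, rc) = fitch(node[0]), fitch(node[1])
    inter = ls & rs
    if inter:                                  # children agree: no extra change
        return inter, lc + rc
    return ls | rs, lc + rc + 1                # children disagree: one change

# Four taxa with states a, a, b, a at one site:
tree = (({"a"}, {"a"}), ({"b"}, {"a"}))
states, changes = fitch(tree)
print(states, changes)                         # {'a'} 1 -> MP infers state a
```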

  7. Modelling Inter-Particle Forces and Resulting Agglomerate Sizes in Cement-Based Materials

    DEFF Research Database (Denmark)

    Kjeldsen, Ane Mette; Geiker, Mette Rica

    2005-01-01

    The theory of inter-particle forces versus external shear in cement-based materials is reviewed. On this basis, the maximum agglomerate size present after the combined action of superplasticizers and shear is calculated. Qualitative experimental results indicate that external shear ...

  8. Tactical Production and Lot Size Planning with Lifetime Constraints

    DEFF Research Database (Denmark)

    Raiconi, Andrea; Pahl, Julia; Gentili, Monica

    2017-01-01

    In this work, we address a variant of the capacitated lot sizing problem. This is a classical problem addressing the issue of aggregating lot sizes for a finite number of discrete periodic demands that need to be satisfied, thus setting up production resources and eventually creating inventories...

  9. WMAXC: a weighted maximum clique method for identifying condition-specific sub-network.

    Directory of Open Access Journals (Sweden)

    Bayarbaatar Amgalan

    Sub-networks can expose complex patterns in an entire bio-molecular network by extracting interactions that depend on temporal or condition-specific contexts. When genes interact with each other during cellular processes, they may form differential co-expression patterns with other genes across different cell states. The identification of condition-specific sub-networks is of great importance in investigating how a living cell adapts to environmental changes. In this work, we propose the weighted MAXimum clique (WMAXC) method to identify a condition-specific sub-network. WMAXC first proposes scoring functions that jointly measure condition-specific changes to both individual genes and gene-gene co-expressions. It then employs a weaker formula of the general maximum clique problem and relates the maximum-scored clique of a weighted graph to the optimization of a quadratic objective function under sparsity constraints. We combine a continuous genetic algorithm and a projection procedure to obtain a single optimal sub-network that maximizes the objective function (the scoring function) over the standard simplex (the sparsity constraint). We applied the WMAXC method to both simulated data and real data sets of ovarian and prostate cancer. Compared with previous methods, WMAXC selected a large fraction of cancer-related genes, which were enriched in cancer-related pathways. The results demonstrated that our method efficiently captured a subset of genes relevant under the investigated condition.
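
    The quadratic-over-the-simplex formulation echoes the classical Motzkin-Straus program, in which local maximizers of x^T A x over the standard simplex correspond to maximal cliques of the graph with adjacency matrix A. The sketch below illustrates that connection with replicator dynamics, a generic local heuristic; it is not WMAXC's genetic-algorithm-plus-projection procedure, and all names are ours.

```python
import numpy as np

def replicator_clique(A, iters=2000, tol=1e-12):
    """Local maximizer of x^T A x over the standard simplex.

    A: symmetric (possibly weighted) adjacency matrix with zero diagonal.
    The support of the returned x approximates a maximal clique.
    """
    x = np.full(A.shape[0], 1.0 / A.shape[0])  # start at the barycenter
    for _ in range(iters):
        Ax = A @ x
        denom = x @ Ax
        if denom < tol:
            break
        x_new = x * Ax / denom                 # replicator step stays on simplex
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# 5-node graph whose unique maximum clique is {0, 1, 2}
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
print(np.flatnonzero(replicator_clique(A) > 1e-3))   # expected: [0 1 2]
```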

  10. Production of sintered alumina from powder; optimization of the sintering parameters for maximum mechanical resistance

    International Nuclear Information System (INIS)

    Rocha, J.C. da.

    1981-02-01

    Pure sintered alumina and the optimization of the sintering parameters in order to obtain the highest mechanical resistance are discussed. Test specimens are sintered from a fine powder of pure alumina (Al2O3), α phase, at different temperatures and times, in air. The microstructures are analysed with respect to porosity and grain size. Depending on the temperature and the time of sintering, the mechanical resistance exhibits a maximum.

  11. Maximum Likelihood Method for Predicting Environmental Conditions from Assemblage Composition: The R Package bio.infer

    Directory of Open Access Journals (Sweden)

    Lester L. Yuan

    2007-06-01

    This paper provides a brief introduction to the R package bio.infer, a set of scripts that facilitates the use of maximum likelihood (ML) methods for predicting environmental conditions from assemblage composition. Environmental conditions can often be inferred from biological data alone, and these inferences are useful when other sources of data are unavailable. ML prediction methods are statistically rigorous and applicable to a broader set of problems than the more commonly used weighted averaging techniques. However, ML methods require a substantially greater investment of time to program algorithms and to perform computations. This package is designed to reduce the effort required to apply ML prediction methods.
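
    In outline, ML inference of this kind treats each taxon's presence or absence as a Bernoulli observation whose probability depends on the environmental variable, and maximizes the joint likelihood over that variable. The sketch below illustrates the idea generically; the response curves, taxon names, and function names are hypothetical and do not reflect bio.infer's actual interface.

```python
import numpy as np

def infer_environment(presence, curves, grid):
    """Maximum likelihood estimate of an environmental variable x.

    presence: dict taxon -> 0/1 observed in the assemblage.
    curves:   dict taxon -> callable giving P(taxon present | x).
    grid:     candidate x values (grid search stands in for an optimizer).
    """
    def log_lik(x):
        ll = 0.0
        for taxon, y in presence.items():
            p = np.clip(curves[taxon](x), 1e-9, 1 - 1e-9)
            ll += y * np.log(p) + (1 - y) * np.log(1 - p)
        return ll
    return max(grid, key=log_lik)

# Hypothetical logistic responses to stream temperature (deg C)
logistic = lambda x, b0, b1: 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
curves = {"mayfly": lambda x: logistic(x, 6.0, -0.4),   # prefers cold water
          "midge": lambda x: logistic(x, -4.0, 0.3)}    # prefers warm water
presence = {"mayfly": 1, "midge": 0}
print(infer_environment(presence, curves, np.linspace(0.0, 30.0, 301)))
```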

  12. KCUT, code to generate minimal cut sets for fault trees

    International Nuclear Information System (INIS)

    Han, Sang Hoon

    2008-01-01

    1 - Description of program or function: KCUT is a software package that generates minimal cut sets for fault trees. 2 - Methods: Expand the fault tree into cut sets and delete non-minimal cut sets. 3 - Restrictions on the complexity of the problem: Size and complexity of the fault tree.
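
    The expand-then-prune strategy in item 2 can be shown with a small top-down expansion in the style of the classic MOCUS algorithm: OR gates take the union of their children's cut sets, AND gates combine them, and any cut set containing another is deleted. This is a hedged sketch of the general technique, not KCUT's implementation; the gate encoding is an assumption.

```python
def cut_sets(gate, tree):
    """Return all cut sets (as frozensets of basic events) for gate."""
    if gate not in tree:                        # basic event
        return [frozenset([gate])]
    kind, children = tree[gate]
    child_sets = [cut_sets(c, tree) for c in children]
    if kind == "OR":                            # union of the children's sets
        return [cs for sets in child_sets for cs in sets]
    result = [frozenset()]                      # AND: cross-product combination
    for sets in child_sets:
        result = [a | b for a in result for b in sets]
    return result

def minimal(sets):
    """Delete non-minimal cut sets (proper supersets of another set)."""
    return [s for s in sets if not any(t < s for t in sets)]

tree = {"TOP": ("AND", ["G1", "G2"]),
        "G1": ("OR", ["A", "B"]),
        "G2": ("OR", ["A", "C"])}
print(minimal(cut_sets("TOP", tree)))   # [{A}, {B, C}]; {A,B} and {A,C} pruned
```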

  13. A step by step selection method for the location and the size of a waste-to-energy facility targeting the maximum output energy and minimization of gate fee.

    Science.gov (United States)

    Kyriakis, Efstathios; Psomopoulos, Constantinos; Kokkotis, Panagiotis; Bourtsalas, Athanasios; Themelis, Nikolaos

    2017-06-23

    This study develops an algorithm that provides a step-by-step method for selecting the location and the size of a waste-to-energy (WtE) facility targeting the maximum output energy, while also considering the basic obstacle, which is in many cases the gate fee. Various parameters were identified and evaluated in order to formulate the proposed decision-making method in the form of an algorithm. The principal simulation input is the amount of municipal solid waste (MSW) available for incineration, which, together with its net calorific value, is the most important factor for the feasibility of the plant. Moreover, the research focuses both on the parameters that could increase the energy production and on those that affect the R1 energy efficiency factor. Estimation of the final gate fee is achieved through an economic analysis of the entire project, investigating both the expenses and the revenues expected for the selected site and outputs of the facility. At this point, a number of common revenue methods were included in the algorithm. The developed algorithm has been validated using three case studies in Greece: Athens, Thessaloniki, and Central Greece, where the cities of Larisa and Volos were selected for the application of the proposed decision-making tool. These case studies were selected based on a previous publication by two of the authors in which these areas were examined. The results reveal that developing a "solid" methodological approach for selecting the site and the size of a WtE facility is feasible. However, maximization of the energy efficiency factor R1 requires high utilization factors, while minimization of the final gate fee requires a high R1 and high metals recovery from the bottom ash, as well as economic exploitation of the recovered raw materials, if any.
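
    As a rough illustration of how a break-even gate fee falls out of annualized capital cost, operating cost, and energy/materials revenues, consider the toy calculation below. Every figure and parameter name is a hypothetical placeholder chosen for the sketch, not a value from the study.

```python
def gate_fee(tonnes_per_year, capex, lifetime_yr, rate,
             opex_per_tonne, mwh_per_tonne, power_price,
             metal_revenue_per_tonne=0.0):
    """Gate fee (currency per tonne) at which the plant breaks even."""
    # Annualize the capital cost with the standard annuity formula
    annuity = capex * rate / (1.0 - (1.0 + rate) ** -lifetime_yr)
    cost = annuity + opex_per_tonne * tonnes_per_year
    revenue = tonnes_per_year * (mwh_per_tonne * power_price
                                 + metal_revenue_per_tonne)
    return (cost - revenue) / tonnes_per_year

fee = gate_fee(tonnes_per_year=300_000, capex=200e6, lifetime_yr=25,
               rate=0.06, opex_per_tonne=40.0, mwh_per_tonne=0.55,
               power_price=80.0, metal_revenue_per_tonne=3.0)
print(f"required gate fee: {fee:.1f} per tonne")   # ~45 with these inputs
```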

  14. The attention-weighted sample-size model of visual short-term memory

    DEFF Research Database (Denmark)

    Smith, Philip L.; Lilburn, Simon D.; Corbett, Elaine A.

    2016-01-01

    ... exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items ...

  15. Nonlinear analysis of vehicle control actuations based on controlled invariant sets

    Directory of Open Access Journals (Sweden)

    Németh Balázs

    2016-03-01

    In the paper, an analysis method is applied to the lateral stabilization problem of vehicle systems. The aim is to find the largest state-space region in which the lateral stability of the vehicle can be guaranteed by the peak-bounded control input. In the analysis, the nonlinear polynomial sum-of-squares programming method is applied. A practical computation technique is developed to calculate the maximum controlled invariant set of the system. The method calculates the maximum controlled invariant sets of the steering and braking control systems at various velocities and road conditions. Illustrative examples show that, depending on the environment, different vehicle dynamic regions can be reached and stabilized by these controllers. The results can serve as a theoretical basis for the controllers' interventions into the vehicle control system.

  16. How Well Does Fracture Set Characterization Reduce Uncertainty in Capture Zone Size for Wells Situated in Sedimentary Bedrock Aquifers?

    Science.gov (United States)

    West, A. C.; Novakowski, K. S.

    2005-12-01

    ... beyond a threshold concentration within the specified time. Aquifers are simulated by drawing the random spacings and apertures from specified distributions. Predictions are made of capture zone size assuming various degrees of knowledge of these distributions, with the parameters of the horizontal fractures being estimated using simulated hydraulic tests and a maximum likelihood estimator. The uncertainty is evaluated by calculating the variance in the capture zone size estimated in multiple realizations. The results show that despite good strategies to estimate the parameters of the horizontal fractures the uncertainty in capture zone size is enormous, mostly due to the lack of available information on vertical fractures. Also, at realistic distances (less than ten kilometers) and using realistic transmissivity distributions for the horizontal fractures the uptake of solute from fractures into matrix cannot be relied upon to protect the production well from contamination.

  17. The size distributions of fragments ejected at a given velocity from impact craters

    Science.gov (United States)

    O'Keefe, John D.; Ahrens, Thomas J.

    1987-01-01

    The mass distribution of fragments that are ejected at a given velocity for impact craters is modeled to allow extrapolation of laboratory, field, and numerical results to large scale planetary events. The model is semi-empirical in nature and is derived from: (1) numerical calculations of cratering and the resultant mass versus ejection velocity, (2) observed ejecta blanket particle size distributions, (3) an empirical relationship between maximum ejecta fragment size and crater diameter, (4) measurements and theory of maximum ejecta size versus ejecta velocity, and (5) an assumption on the functional form for the distribution of fragments ejected at a given velocity. This model implies that for planetary impacts into competent rock, the distribution of fragments ejected at a given velocity is broad, e.g., 68 percent of the mass of the ejecta at a given velocity contains fragments having a mass less than 0.1 times a mass of the largest fragment moving at that velocity. The broad distribution suggests that in impact processes, additional comminution of ejecta occurs after the upward initial shock has passed in the process of the ejecta velocity vector rotating from an initially downward orientation. This additional comminution produces the broader size distribution in impact ejecta as compared to that obtained in simple brittle failure experiments.

  18. Halo-independence with quantified maximum entropy at DAMA/LIBRA

    Energy Technology Data Exchange (ETDEWEB)

    Fowlie, Andrew, E-mail: andrew.j.fowlie@googlemail.com [ARC Centre of Excellence for Particle Physics at the Tera-scale, Monash University, Melbourne, Victoria 3800 (Australia)

    2017-10-01

    Using the DAMA/LIBRA anomaly as an example, we formalise the notion of halo-independence in the context of Bayesian statistics and quantified maximum entropy. We consider an infinite set of possible profiles, weighted by an entropic prior and constrained by a likelihood describing noisy measurements of modulated moments by DAMA/LIBRA. Assuming an isotropic dark matter (DM) profile in the galactic rest frame, we find the most plausible DM profiles and predictions for unmodulated signal rates at DAMA/LIBRA. The entropic prior contains an a priori unknown regularisation factor, β, that describes the strength of our conviction that the profile is approximately Maxwellian. By varying β, we smoothly interpolate between a halo-independent and a halo-dependent analysis, thus exploring the impact of prior information about the DM profile.

  19. Comparison of polyp size and volume at CT colonography: implications for follow-up CT colonography.

    Science.gov (United States)

    Bethea, Emily; Nwawka, Ogonna K; Dachman, Abraham H

    2009-12-01

    The purpose of this study was to evaluate the reliability of polyp measurements at CT colonography and the factors that affect the measurements. Fifty colonoscopically proven cases of polyps 6 mm in diameter or larger were analyzed by two observers who measured each polyp in supine and prone views. Manual measurements of volume by summation of 2D areas, of 2D maximum diameter, and of 3D maximum diameter, and automated measurements of 3D maximum diameter and volume were recorded for each observer and were repeated for one of the observers. Intraobserver and interobserver agreement was calculated. Analysis was performed to determine the measurement parameter that correlated most with summation-of-areas volume. Supine and prone measurements as a surrogate for tracking change in polyp size over time were analyzed to determine the measurement parameter with the least variation. Maximum diameter measured manually on 3D images had the highest correlation with summation-of-areas volume. Manual summation-of-areas volume was found to have the least variation between supine and prone measurements. Linear polyp measurement in the 3D endoluminal view appears to be the most reliable parameter for use in the decision to excise a polyp according to current guidelines. In our study, manual calculation of volume with summation of areas was found to be the most reliable measurement parameter for observing polyp growth over serial examinations. High reliability of polyp measurements is essential for adequate assessment of change in polyp size over serial examinations because many patients with intermediate-size polyps are expected to choose surveillance.
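
    The summation-of-areas volume referred to above is simply the sum of the traced per-slice cross-sectional areas multiplied by the slice spacing. A minimal sketch with made-up numbers (the measurement in the study itself is manual, on supine and prone CT colonography views):

```python
import numpy as np

def summation_of_areas_volume(areas_mm2, slice_thickness_mm):
    """Polyp volume from per-slice areas: V = sum(A_i) * dz."""
    return float(np.sum(areas_mm2) * slice_thickness_mm)

# Hypothetical polyp traced on five 1.25 mm slices
areas = [12.0, 28.0, 36.0, 25.0, 9.0]                  # mm^2 per slice
print(summation_of_areas_volume(areas, 1.25), "mm^3")  # 137.5 mm^3
```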

  20. Genetic Analysis of Daily Maximum Milking Speed by a Random Walk Model in Dairy Cows

    DEFF Research Database (Denmark)

    Karacaören, Burak; Janss, Luc; Kadarmideen, Haja

    Data were obtained from dairy cows stationed at the research farm of ETH Zurich for maximum milking speed. The main aims of this paper are (a) to evaluate whether the Wood curve is suitable to model the mean lactation curve and (b) to predict longitudinal breeding values by random regression and random walk models of maximum milking speed. The Wood curve did not provide a good fit to the data set. Quadratic random regressions gave better predictions compared with the random walk model. However, the random walk model does not need to be evaluated for different orders of regression coefficients. In addition, with Kalman filter applications the random walk model could give online prediction of breeding values. Hence, without waiting for whole-lactation records, genetic evaluation could be made when daily or monthly data become available.

  1. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-second (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10⁰ to 10⁶ km² in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
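
    A regression of this kind is straightforward to reproduce in outline: regress log-transformed annual flow on catchment and climate covariates, then apply the fitted model at ungauged sites. The sketch below uses synthetic data and only two covariates, so it illustrates the approach rather than the paper's actual model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 4000                                        # roughly the number of catchments
log_area = rng.uniform(2.0, 6.0, n)             # log10 of catchment area (km^2)
precip = rng.uniform(300.0, 2500.0, n)          # mean annual precipitation (mm)
# Synthetic "observed" log10 mean annual flow with noise
log_af = -3.0 + 0.95 * log_area + 0.0006 * precip + rng.normal(0.0, 0.2, n)

X = np.column_stack([log_area, precip])
model = LinearRegression().fit(X, log_af)
print("variance explained:", round(model.score(X, log_af), 3))
pred = model.predict(np.array([[4.0, 1200.0]]))[0]
print("predicted log10 AF at an ungauged site:", round(pred, 2))
```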

  2. Determination of the void nucleation rate from void size distributions

    International Nuclear Information System (INIS)

    Brailsford, A.D.

    1977-01-01

    A method of estimating the void nucleation rate from one void size distribution and from observation of the maximum void radius at prior times is proposed. Implicit in the method are the assumptions that both variations in the critical radius with dose and vacancy thermal emission processes during post-nucleation quasi-steady-state growth may be neglected.

  3. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  4. Size effects on cavitation instabilities

    DEFF Research Database (Denmark)

    Niordson, Christian Frithiof; Tvergaard, Viggo

    2006-01-01

    ... growth is here analyzed for such cases. A finite strain generalization of a higher order strain gradient plasticity theory is applied for a power-law hardening material, and the numerical analyses are carried out for an axisymmetric unit cell containing a spherical void. In the range of high stress triaxiality, where cavitation instabilities are predicted by conventional plasticity theory, such instabilities are also found for the nonlocal theory, but the effects of gradient hardening delay the onset of the instability. Furthermore, in some cases the cavitation stress reaches a maximum and then decays as the void grows to a size well above the characteristic material length.

  5. Real-world maximum power point tracking simulation of PV system based on Fuzzy Logic control

    Science.gov (United States)

    Othman, Ahmed M.; El-arini, Mahdi M. M.; Ghitas, Ahmed; Fathy, Ahmed

    2012-12-01

    In recent years, solar energy has become one of the most important alternative sources of electric energy, so it is important to improve the efficiency and reliability of photovoltaic (PV) systems. Maximum power point tracking (MPPT) plays an important role in photovoltaic power systems because it maximizes the power output from a PV system for a given set of conditions, and therefore maximizes array efficiency. This paper presents a maximum power point tracker (MPPT) using Fuzzy Logic theory for a PV system. The work focuses on the well-known Perturb and Observe (P&O) algorithm, which is compared to a designed fuzzy logic controller (FLC). The simulation of the MPPT controller with a DC/DC Ćuk converter feeding a load is carried out. The results showed that the proposed Fuzzy Logic MPPT for the PV system is valid.
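
    The P&O baseline mentioned above is simple to state: perturb the operating voltage, observe the change in power, and keep perturbing in the direction that increased power. The sketch below is the textbook algorithm applied to a made-up power curve, not the paper's fuzzy logic controller.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One P&O iteration: return the next panel voltage set-point."""
    dv, dp = v - v_prev, p - p_prev
    if dp == 0:
        return v
    if (dp > 0) == (dv > 0):       # power rose in the direction we moved
        return v + step
    return v - step                # power fell: reverse direction

pv_power = lambda v: 90.0 - (v - 17.0) ** 2   # toy curve with its MPP at 17 V
v_prev, v = 12.0, 12.5
for _ in range(40):
    v_next = perturb_and_observe(v, pv_power(v), v_prev, pv_power(v_prev))
    v_prev, v = v, v_next
print(round(v, 1), "V")            # settles into a small oscillation around 17 V
```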

  6. Search for the maximum efficiency of a ribbed-surfaces device, providing a tight seal

    International Nuclear Information System (INIS)

    Boutin, Jeanne.

    1977-04-01

    The purpose of this experiment was to determine the geometrical characteristics of ribbed surfaces used to equip devices in translation or slow rotation motion that must form an acceptable seal between slightly viscous fluids. It systematically studies the pressure loss coefficient lambda as a function of the different parameters defining the shape of the ribs and their relative position on the opposite faces. It shows that passages with two ribbed surfaces lead to markedly better results than those with only one, the maximum value of lambda, equal to 0.5, being obtained with the ratios pitch/clearance = 5 and depth of groove/clearance = 1.2, and with the teeth face to face on the two opposite ribbed surfaces. With certain shapes, an alternating position of the ribs can lead to a maximum of lambda still lower than 0.5.

  7. What if the Diatoms of the Deep Chlorophyll Maximum Can Ascend?

    Science.gov (United States)

    Villareal, T. A.

    2016-02-01

    Buoyancy regulation is an integral part of diatom ecology via its role in sinking rates and is fundamental to understanding their distribution and abundance. Numerous studies have documented the effects of size and nutrition on sinking rates. Many pelagic diatoms have low intrinsic sinking rates when healthy and nutrient-replete (e.g., in the deep chlorophyll maximum). The potential for ascending behavior adds an additional layer of complexity by allowing both active depth regulation similar to that observed in flagellated taxa and upward transport by some fraction of deep euphotic zone diatom blooms supported by nutrient injection. In this talk, I review the data documenting positive buoyancy in small diatoms, offer direct visual evidence of ascending behavior in common diatoms typical of both oceanic and coastal zones, and note the characteristics of sinking rate distributions within a single species. Buoyancy control leads to bidirectional movement at similar rates across a wide size spectrum of diatoms, although ascending behavior may represent only a small portion of an individual species' abundance. While much remains to be learned, the paradigm of unidirectional downward movement by diatoms is both inaccurate and an oversimplification.

  8. Tight bounds on the size of neural networks for classification problems

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V. [Los Alamos National Lab., NM (United States); Pauw, T. de [Universite Catholique de Louvain, Louvain-la-Neuve (Belgium). Dept. de Mathematique

    1997-06-01

    This paper relies on the entropy of a data-set (i.e., number-of-bits) to prove tight bounds on the size of neural networks solving a classification problem. First, based on a sequence of geometrical steps, the authors constructively compute an upper bound of O(mn) on the number-of-bits for a given data-set - here m is the number of examples and n is the number of dimensions (i.e., R^n). This result is used further in a nonconstructive way to bound the size of neural networks which correctly classify that data-set.

  9. The Hausdorff measure of chaotic sets of adjoint shift maps

    Energy Technology Data Exchange (ETDEWEB)

    Wang Huoyun [Department of Mathematics of Guangzhou University, Guangzhou 510006 (China)]. E-mail: wanghuoyun@sina.com; Song Wangan [Department of Computer, Huaibei Coal Industry Teacher College, Huaibei 235000 (China)

    2006-11-15

    In this paper, the size of chaotic sets of adjoint shift maps is estimated by Hausdorff measure. We prove that for any adjoint shift map there exists a finitely chaotic set with full Hausdorff measure.

  10. Rule reversal: Ecogeographical patterns of body size variation in the common treeshrew (Mammalia, Scandentia)

    Science.gov (United States)

    Sargis, Eric J.; Millien, Virginie; Woodman, Neal; Olson, Link E.

    2018-01-01

    There are a number of ecogeographical “rules” that describe patterns of geographical variation among organisms. The island rule predicts that populations of larger mammals on islands evolve smaller mean body size than their mainland counterparts, whereas smaller‐bodied mammals evolve larger size. Bergmann's rule predicts that populations of a species in colder climates (generally at higher latitudes) have larger mean body sizes than conspecifics in warmer climates (at lower latitudes). These two rules are rarely tested together and neither has been rigorously tested in treeshrews, a clade of small‐bodied mammals in their own order (Scandentia) broadly distributed in mainland Southeast Asia and on islands throughout much of the Sunda Shelf. The common treeshrew, Tupaia glis, is an excellent candidate for study and was used to test these two rules simultaneously for the first time in treeshrews. This species is distributed on the Malay Peninsula and several offshore islands east, west, and south of the mainland. Using craniodental dimensions as a proxy for body size, we investigated how island size, distance from the mainland, and maximum sea depth between the mainland and the islands relate to body size of 13 insular T. glis populations while also controlling for latitude and correlation among variables. We found a strong negative effect of latitude on body size in the common treeshrew, indicating the inverse of Bergmann's rule. We did not detect any overall difference in body size between the island and mainland populations. However, there was an effect of island area and maximum sea depth on body size among island populations. Although there is a strong latitudinal effect on body size, neither Bergmann's rule nor the island rule applies to the common treeshrew. The results of our analyses demonstrate the necessity of assessing multiple variables simultaneously in studies of ecogeographical rules.

  11. Size-change termination and transition invariants

    DEFF Research Database (Denmark)

    Heizmann, Matthias; Jones, Neil; Podelski, Andreas

    2010-01-01

    Two directions of recent work on program termination use the concepts of size-change termination resp. transition invariants. The difference in the setting has as consequence the inherent incomparability of the analysis and verification methods that result from this work. Yet, in order...

  12. Fleet Sizing of Automated Material Handling Using Simulation Approach

    Science.gov (United States)

    Wibisono, Radinal; Ai, The Jin; Ratna Yuniartha, Deny

    2018-03-01

    Automated material handling tends to be chosen over human power for material handling activity on the production floor of a manufacturing company. One critical issue in implementing automated material handling is the design phase, which must ensure that the material handling activity is efficient in terms of cost. Fleet sizing is one of the topics in this design phase. In this research, a simulation approach is used to solve the fleet sizing problem in flow shop production so as to ensure an optimum situation, meaning minimum flow time and maximum capacity on the production floor. The simulation approach is used because the flow shop can be modelled as a queueing network in which the inter-arrival times do not follow an exponential distribution. The contribution of this research is therefore solving a multi-objective fleet sizing problem in flow shop production using a simulation approach with the ARENA software.

  13. Sizing of air cleaning systems for access to nuclear plant spaces

    International Nuclear Information System (INIS)

    Estreich, P.J.

    A mathematical basis is developed to provide the practicing engineer with a method for sizing air-cleaning systems for nuclear facilities. In particular, general formulas are provided to relate cleaning and contamination dynamics of an enclosure such that safe conditions are obtained when working crews enter. Included in these considerations is the sizing of an air-cleaning system to provide rapid decontamination of airborne radioactivity. Multiple-nuclide contamination sources, leak rate, direct radiation, contaminant mixing efficiency, filter efficiencies, air-cleaning-system operational modes, and criteria for maximum permissible concentrations are integrated into the procedure.

  14. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed

  15. XRD characterisation of nanoparticle size and shape distributions

    International Nuclear Information System (INIS)

    Armstrong, N.; Kalceff, W.; Cline, J.P.; Bonevich, J.

    2004-01-01

    The form of XRD lines and the extent of their broadening provide useful structural information about the shape, size distribution, and modal characteristics of the nanoparticles comprising the specimen. The defect content of the nanoparticles can also be determined, including the type, dislocation density, and stacking faults/twinning. This information is convoluted together and can be grouped into 'size' and 'defect' broadening contributions. Modern X-ray diffraction analysis techniques have concentrated on quantifying the broadening arising from the size and defect contributions, while accounting for overlapping of profiles, instrumental broadening, background scattering and noise components. We report on a combined Bayesian/Maximum Entropy (MaxEnt) technique developed for use in the certification of a NIST Standard Reference Material (SRM) for size-broadened line profiles. The approach used was chosen because of its generality in removing instrumental broadening from the observed line profiles, and its ability to determine not only the average crystallite size, but also the distribution of sizes and the average shape of crystallites. Moreover, this Bayesian/MaxEnt technique is fully quantitative, in that it also determines uncertainties in the crystallite-size distribution and other parameters. Both experimental and numerical simulations of size-broadened line profiles modelled on a range of specimens with spherical and non-spherical morphologies are presented to demonstrate how this information can be retrieved from line profile data. The sensitivity of the Bayesian/MaxEnt method in determining the size distribution under varying a priori information is emphasised and discussed.

  16. No support for Heincke's law in hagfish (Myxinidae): lack of an association between body size and the depth of species occurrence.

    Science.gov (United States)

    Schumacher, E L; Owens, B D; Uyeno, T A; Clark, A J; Reece, J S

    2017-08-01

    This study tests for interspecific evidence of Heincke's law among hagfishes and advances the field of research on body size and depth of occurrence in fishes by including a phylogenetic correction and by examining depth in four ways: maximum depth, minimum depth, mean depth of recorded specimens and the average of maximum and minimum depths of occurrence. Results yield no evidence for Heincke's law in hagfishes, no phylogenetic signal for the depth at which species occur, but moderate to weak phylogenetic signal for body size, suggesting that phylogeny may play a role in determining body size in this group. © 2017 The Fisheries Society of the British Isles.

  17. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Background: Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods: We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control we used an un-weighted fitting method. Results: A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p ...). Conclusions: This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
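
    The inverse power law fit described in the Methods is easy to reproduce in outline with SciPy: fit y = a - b * x**(-c) to (sample size, performance) points by weighted nonlinear least squares and extrapolate. The data points, weights, and starting values below are made up for illustration and are not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def inv_power(x, a, b, c):
    """Learning curve: classifier performance vs. training-set size."""
    return a - b * np.power(x, -c)

sizes = np.array([50.0, 100.0, 200.0, 400.0, 800.0])   # annotated sample sizes
accs = np.array([0.71, 0.78, 0.83, 0.86, 0.88])        # observed performance
weights = np.sqrt(sizes)                               # trust larger samples more
popt, _ = curve_fit(inv_power, sizes, accs,
                    p0=[0.95, 1.0, 0.5], sigma=1.0 / weights)
for n in (1600, 3200):
    print(n, round(inv_power(n, *popt), 3))            # extrapolated performance
```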

  18. Dust fluxes and iron fertilization in Holocene and Last Glacial Maximum climates

    Science.gov (United States)

    Lambert, Fabrice; Tagliabue, Alessandro; Shaffer, Gary; Lamy, Frank; Winckler, Gisela; Farias, Laura; Gallardo, Laura; De Pol-Holz, Ricardo

    2015-07-01

    Mineral dust aerosols play a major role in present and past climates. To date, we rely on climate models for estimates of dust fluxes to calculate the impact of airborne micronutrients on biogeochemical cycles. Here we provide a new global dust flux data set for Holocene and Last Glacial Maximum (LGM) conditions based on observational data. A comparison with dust flux simulations highlights regional differences between observations and models. By forcing a biogeochemical model with our new data set and using this model's results to guide a millennial-scale Earth System Model simulation, we calculate the impact of enhanced glacial oceanic iron deposition on the LGM-Holocene carbon cycle. On centennial timescales, the higher LGM dust deposition results in a weak reduction of atmospheric CO2 due to a more efficient biological pump. This is followed by a further ~10 ppm reduction over millennial timescales due to greater carbon burial and carbonate compensation.

  19. An efficient quantum scheme for Private Set Intersection

    Science.gov (United States)

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-01

    Private Set Intersection allows a client to privately compute the set intersection with the collaboration of the server; it is one of the most fundamental problems in privacy-preserving multiparty collaborative computation. In this paper, we first present a cheat-sensitive quantum scheme for Private Set Intersection. Compared with classical schemes, our scheme has lower communication complexity, which is independent of the size of the server's set. Therefore, it is very suitable for big data services in the Cloud or large-scale client-server networks.

  20. Applications of the principle of maximum entropy: from physics to ecology.

    Science.gov (United States)

    Banavar, Jayanth R; Maritan, Amos; Volkov, Igor

    2010-02-17

    There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori.
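
    Maximizing (relative) entropy subject to known constraints always yields an exponential-family distribution, with one Lagrange multiplier per constraint; the multipliers are then fixed numerically so the constraints hold. The sketch below works the classic loaded-die example (a discrete variable with a known mean) in that spirit; the numbers are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

states = np.arange(1, 7)          # faces of a die
target_mean = 4.5                 # the single known constraint: <x> = 4.5

def mean_given_beta(beta):
    """Mean of the MaxEnt (Gibbs) distribution p_i ~ exp(-beta * x_i)."""
    w = np.exp(-beta * states)
    return (states * w).sum() / w.sum()

# Solve for the Lagrange multiplier that reproduces the target mean
beta = brentq(lambda b: mean_given_beta(b) - target_mean, -5.0, 5.0)
p = np.exp(-beta * states)
p /= p.sum()
print(np.round(p, 4))                      # probabilities tilt toward larger faces
print("mean check:", round((states * p).sum(), 6))
```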