Class size versus class composition
DEFF Research Database (Denmark)
Jones, Sam
Raising schooling quality in low-income countries is a pressing challenge. Substantial research has considered the impact of cutting class sizes on skills acquisition. Considerably less attention has been given to the extent to which peer effects, which refer to class composition, may also affect … bias from omitted variables, the preferred IV results indicate considerable negative effects due to larger class sizes and larger numbers of overage-for-grade peers. The latter, driven by the highly prevalent practices of grade repetition and academic redshirting, should be considered an important …
Do class size effects differ across grades?
DEFF Research Database (Denmark)
Nandrup, Anne Brink
This paper contributes to the class size literature by analyzing whether short-run class size effects are constant across grade levels in compulsory school. Results are based on administrative data on all pupils enrolled in Danish public schools. Identification is based on a government-imposed class size cap that creates exogenous variation in class sizes. Significant (albeit modest) negative effects of class size increases are found for children on primary school levels. The effects on math abilities are statistically different across primary and secondary school. Larger classes do not affect …
Do Class Size Effects Differ across Grades?
Nandrup, Anne Brink
2016-01-01
This paper contributes to the class size literature by analysing whether short-run class size effects are constant across grade levels in compulsory school. Results are based on administrative data on all pupils enrolled in Danish public schools. Identification is based on a government-imposed class size cap that creates exogenous variation in…
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.; Ito, N.
2013-01-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.
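For context, the low-dissipation model cited in this abstract [Esposito et al., Phys. Rev. Lett. 105, 150603 (2010)] brackets the efficiency at maximum power between two bounds; the upper one is the "universal upper bound" that the simulations approach and then exceed:

```latex
\frac{\eta_C}{2} \;\le\; \eta^{*} \;\le\; \frac{\eta_C}{2 - \eta_C},
\qquad \eta_C = 1 - \frac{T_c}{T_h},
```

where $\eta_C$ is the Carnot efficiency for hot and cold reservoir temperatures $T_h$ and $T_c$.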
Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm
Directory of Open Access Journals (Sweden)
S. Radhika
2016-04-01
Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the mean square deviation (MSD) from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
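The robustness mechanism behind MCC filtering can be sketched in a few lines: the Gaussian correntropy kernel scales each LMS-style update by exp(-e²/2σ²), so impulsive errors barely move the weights. The sketch below uses a fixed step size; the paper's MSD-minimizing variable step size is not reproduced, and all constants and signal names are illustrative.

```python
import math
import random

def mcc_lms_identify(x, d, n_taps=4, mu=0.05, sigma=1.0):
    """Identify an FIR system with an MCC-based adaptive filter.

    The kernel exp(-e^2 / (2 sigma^2)) down-weights large (impulsive)
    errors, which is what makes MCC robust compared with plain LMS.
    """
    w = [0.0] * n_taps
    errors = []
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]              # tap-input vector
        y = sum(wi * ui for wi, ui in zip(w, u))        # filter output
        e = d[n] - y                                    # a priori error
        g = math.exp(-e * e / (2.0 * sigma * sigma))    # correntropy kernel
        for i in range(n_taps):
            w[i] += mu * g * e * u[i]                   # MCC weight update
        errors.append(e * e)
    return w, errors

# Noise-free system identification example with an assumed 4-tap plant.
random.seed(0)
true_w = [0.8, -0.4, 0.2, 0.1]
x = [random.gauss(0, 1) for _ in range(2000)]
d = [0.0] * 3 + [sum(true_w[i] * x[n - i] for i in range(4))
                 for n in range(3, 2000)]
w, errs = mcc_lms_identify(x, d)
```

After convergence the squared error collapses toward zero and the weights approach the plant coefficients.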
Reduced oxygen at high altitude limits maximum size.
Peck, L S; Chapelle, G
2003-11-07
The trend towards large size in marine animals with latitude, and the existence of giant marine species in polar regions have long been recognized, but remained enigmatic until a recent study showed it to be an effect of increased oxygen availability in sea water at low temperature. The effect was apparent in data from 12 sites worldwide because of variations in water oxygen content controlled by differences in temperature and salinity. Another major physical factor affecting oxygen content in aquatic environments is reduced pressure at high altitude. Suitable data from high-altitude sites are very scarce. However, an exceptionally rich crustacean collection, which remains largely undescribed, was obtained by the British 1937 expedition from Lake Titicaca on the border between Peru and Bolivia in the Andes at an altitude of 3809 m. We show that in Lake Titicaca the maximum length of amphipods is 2-4 times smaller than at other low-salinity sites (the Caspian Sea and Lake Baikal).
Understanding the Role of Reservoir Size on Probable Maximum Precipitation
Woldemichael, A. T.; Hossain, F.
2011-12-01
This study addresses the question 'Does surface area of an artificial reservoir matter in the estimation of probable maximum precipitation (PMP) for an impounded basin?' The motivation of the study was based on the notion that the stationarity assumption that is implicit in the PMP for dam design can be undermined in the post-dam era due to an enhancement of extreme precipitation patterns by an artificial reservoir. In addition, the study lays the foundation for use of regional atmospheric models as one way to perform life cycle assessment for planned or existing dams to formulate best management practices. The American River Watershed (ARW) with the Folsom dam at the confluence of the American River was selected as the study region and the Dec-Jan 1996-97 storm event was selected for the study period. The numerical atmospheric model used for the study was the Regional Atmospheric Modeling System (RAMS). First, the numerical modeling system, RAMS, was calibrated and validated with selected station and spatially interpolated precipitation data. Best combinations of parameterization schemes in RAMS were accordingly selected. Second, to mimic the standard method of PMP estimation by the moisture maximization technique, relative humidity terms in the model were raised to 100% from ground up to the 500 mb level. The obtained model-based maximum 72-hr precipitation values were named extreme precipitation (EP) as a distinction from the PMPs obtained by the standard methods. Third, six hypothetical reservoir size scenarios ranging from no-dam (all-dry) to the reservoir submerging half of the basin were established to test the influence of reservoir size variation on EP. For the case of the ARW, our study clearly demonstrated that the assumption of stationarity that is implicit in the traditional estimation of PMP can be rendered invalid in large part due to the very presence of the artificial reservoir. Cloud tracking procedures performed on the basin also give an indication of the …
Finite groups with three conjugacy class sizes of some elements
Indian Academy of Sciences (India)
Keywords: conjugacy class sizes; p-nilpotent groups; finite groups.
The Structure of the Class of Maximum Tsallis–Havrda–Chavát Entropy Copulas
Directory of Open Access Journals (Sweden)
Jesús E. García
2016-07-01
A maximum entropy copula is the copula associated with the joint distribution, with prescribed marginal distributions on [0, 1], which maximizes the Tsallis–Havrda–Chavát entropy with q = 2. We find necessary and sufficient conditions for each maximum entropy copula to be a copula in the class introduced in Rodríguez-Lallena and Úbeda-Flores (2004), and we also show that each copula in that class is a maximum entropy copula.
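Concretely, for a copula density c(u,v) on the unit square, the q = 2 Tsallis–Havrda–Chavát entropy being maximized is the quadratic functional below; the second formula is the form of the Rodríguez-Lallena and Úbeda-Flores class as given in their 2004 paper (quoted from memory, with λ a real parameter and f, g suitable functions vanishing at 0 and 1):

```latex
S_{2}(c) \;=\; 1 - \int_{0}^{1}\!\!\int_{0}^{1} c(u,v)^{2}\, du\, dv ,
\qquad
C_{\lambda}(u,v) \;=\; uv + \lambda\, f(u)\, g(v).
```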
The False Promise of Class-Size Reduction
Chingos, Matthew M.
2011-01-01
Class-size reduction, or CSR, is enormously popular with parents, teachers, and the public in general. Many parents believe that their children will benefit from more individualized attention in a smaller class and many teachers find smaller classes easier to manage. The pupil-teacher ratio is an easy statistic for the public to monitor as a…
Teachers Class Size, Job Satisfaction and Morale in Cross River ...
African Journals Online (AJOL)
We studied staff class size, job satisfaction and morale in some secondary schools in Cross River State, Nigeria. The relevant variables of teacher class size and workload were used as independent variables while the dependent variables were students' academic performance, teacher satisfaction and morale. Out of the ...
Class Size and Academic Achievement in Introductory Political Science Courses
Towner, Terri L.
2016-01-01
Research on the influence of class size on student academic achievement is important for university instructors, administrators, and students. The article examines the influence of class size--a small section versus a large section--in introductory political science courses on student grades in two comparable semesters. It is expected that…
The Class Size Policy Debate. Working Paper No. 121.
Krueger, Alan B.; Hanushek, Eric A.
These papers examine research on the impact of class size on student achievement. After an "Introduction," (Richard Rothstein), Part 1, "Understanding the Magnitude and Effect of Class Size on Student Achievement" (Alan B. Krueger), presents a reanalysis of Hanushek's 1997 literature review, criticizing Hanushek's vote-counting…
Globalising the Class Size Debate: Myths and Realities
Directory of Open Access Journals (Sweden)
Kevin Watson
2013-10-01
Public opinion reflects a 'common sense' view that smaller classes improve student academic performance. This review reveals that the 'class size' effect of increased academic performance, although significant for disadvantaged students and students in the very early years of schooling, does not necessarily transfer to other student groups. Moreover, the literature indicates there are other more cost-effective variables that enhance student learning outcomes, such as those associated with teacher quality. Internationally, large-scale interventions concluded that systematic class size reductions were more resource intensive, requiring more personnel, training and infrastructure. From the large quantitative studies of the 1980s to the more qualitatively focused research in the last decade, there is now an understanding that class size reductions function to provide opportunities for more student-focused pedagogies and that these pedagogies may be the real reason for improved student academic performance. Consequently, the impact of class size reductions on student performance can only be meaningfully assessed in conjunction with other factors, such as pedagogy.
Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.
Kim, Sehwi; Jung, Inkyung
2017-01-01
The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
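The Gini coefficient at the heart of this approach is the standard inequality measure; a generic, dependency-free sketch is below. How the coefficient is computed over the scan statistic's candidate clusters follows Han et al. and is not reproduced here.

```python
def gini(values):
    """Gini coefficient of non-negative values (0 = perfect equality).

    Uses the standard order-statistics formula:
    G = 2 * sum_i i * x_(i) / (n * sum x) - (n + 1) / n,
    where x_(i) are the values sorted ascending, i = 1..n.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n
```

Equal values give 0; a single value holding everything gives the maximum (n-1)/n for n observations.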
Dependence of US hurricane economic loss on maximum wind speed and storm size
International Nuclear Information System (INIS)
Zhai, Alice R; Jiang, Jonathan H
2014-01-01
Many empirical hurricane economic loss models consider only wind speed and neglect storm size. These models may be inadequate in accurately predicting the losses of super-sized storms, such as Hurricane Sandy in 2012. In this study, we examined the dependences of normalized US hurricane loss on both wind speed and storm size for 73 tropical cyclones that made landfall in the US from 1988 through 2012. A multi-variate least squares regression is used to construct a hurricane loss model using both wind speed and size as predictors. Using maximum wind speed and size together captures more variance of losses than using wind speed or size alone. It is found that normalized hurricane loss (L) approximately follows a power-law relation with maximum wind speed (V_max) and size (R), L = 10^c · V_max^a · R^b, with c determining an overall scaling factor and the exponents a and b generally ranging between 4-12 and 2-4, respectively. Both a and b tend to increase with stronger wind speed. Hurricane Sandy's size was about three times the average size of all hurricanes analyzed. Based on the bi-variate regression model that explains the most variance for hurricanes, Hurricane Sandy's loss would be approximately 20 times smaller if its size were of the average size with maximum wind speed unchanged. It is important to revise conventional empirical hurricane loss models that are only dependent on maximum wind speed to include both maximum wind speed and size as predictors.
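The power-law form lends itself to a quick back-of-the-envelope check of the Sandy comparison. In this sketch the exponents a, b and the scale c are illustrative values inside the ranges reported in the abstract, not the paper's fitted coefficients, and the wind speed and radii are made-up numbers.

```python
def hurricane_loss(v_max, r, a, b, c):
    """Normalized loss in the paper's power-law form: L = 10^c * Vmax^a * R^b."""
    return (10.0 ** c) * (v_max ** a) * (r ** b)

# Sandy-like thought experiment: same wind speed, size three times the average.
a, b, c = 6.0, 3.0, -10.0                      # assumed exponents for illustration
loss_avg_size = hurricane_loss(40.0, 100.0, a, b, c)
loss_large = hurricane_loss(40.0, 300.0, a, b, c)
ratio = loss_large / loss_avg_size             # equals 3^b = 27 for b = 3
```

With b between 2 and 4, tripling the radius multiplies the modeled loss by a factor of 9 to 81, consistent with the abstract's roughly 20-fold figure.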
Class Size Reduction: Great Hopes, Great Challenges. Policy Brief.
WestEd, San Francisco, CA.
This policy brief examines the benefits and the challenges that accompany class-size reduction (CSR). It suggests that when designing CSR programs, states should carefully assess specific circumstances in their schools as they adopt or modify CSR efforts to avoid the unintended consequences that some programs have experienced. Some of the…
Class Size, Type of Exam and Student Achievement
DEFF Research Database (Denmark)
Madsen, Erik Strøjer
2011-01-01
Education as a road to growth has been on the political agenda in recent years and promoted not least by the institutions of higher education. At the same time the universities have been squeezed for resources for a long period and the average class size has increased as a result. However, the production technology for higher education is not well known, and this study highlights the relation between class size and student achievement using a large dataset of 80,000 gradings from the Aarhus School of Business. The estimations show a large negative effect of larger classes on the grade level of students. The type of exam also has a large and significant effect on student achievement, and oral exams, take-home exams and group exams reward the student with a significantly higher grade compared with an on-site written exam.
The maximum sizes of large scale structures in alternative theories of gravity
Energy Technology Data Exchange (ETDEWEB)
Bhattacharya, Sourav [IUCAA, Pune University Campus, Post Bag 4, Ganeshkhind, Pune, 411 007 India (India); Dialektopoulos, Konstantinos F. [Dipartimento di Fisica, Università di Napoli ' Federico II' , Complesso Universitario di Monte S. Angelo, Edificio G, Via Cinthia, Napoli, I-80126 Italy (Italy); Romano, Antonio Enea [Instituto de Física, Universidad de Antioquia, Calle 70 No. 52–21, Medellín (Colombia); Skordis, Constantinos [Department of Physics, University of Cyprus, 1 Panepistimiou Street, Nicosia, 2109 Cyprus (Cyprus); Tomaras, Theodore N., E-mail: sbhatta@iitrpr.ac.in, E-mail: kdialekt@gmail.com, E-mail: aer@phys.ntu.edu.tw, E-mail: skordis@ucy.ac.cy, E-mail: tomaras@physics.uoc.gr [Institute of Theoretical and Computational Physics and Department of Physics, University of Crete, 70013 Heraklion (Greece)
2017-07-01
The maximum size of a cosmic structure is given by the maximum turnaround radius—the scale where the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulae for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulae agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the ΛCDM value, by a factor of 1 + 1/(3ω), where ω ≫ 1 is the Brans-Dicke parameter, implying consistency of the theory with current data.
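For reference, the standard ΛCDM maximum turnaround radius for a structure of mass M, against which the abstract's Brans-Dicke result is compared, and the scaling stated in the abstract are:

```latex
R_{\mathrm{TA,max}}^{\Lambda\mathrm{CDM}}
  = \left( \frac{3 G M}{\Lambda c^{2}} \right)^{1/3},
\qquad
R_{\mathrm{TA,max}}^{\mathrm{BD}}
  = \left( 1 + \frac{1}{3\omega} \right) R_{\mathrm{TA,max}}^{\Lambda\mathrm{CDM}},
\quad \omega \gg 1 .
```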
Mclean, Elizabeth L; Forrester, Graham E
2018-04-01
We tested whether fishers' local ecological knowledge (LEK) of two fish life-history parameters, size at maturity (SAM) and maximum body size (MS), was comparable to scientific estimates (SEK) of the same parameters, and whether LEK influenced fishers' perceptions of sustainability. Local ecological knowledge was documented for 82 fishers from a small-scale fishery in Samaná Bay, Dominican Republic, whereas SEK was compiled from the scientific literature. Size at maturity estimates derived from LEK and SEK overlapped for most of the 15 commonly harvested species (10 of 15). In contrast, fishers' maximum size estimates were usually lower than (eight species), or overlapped with (five species) scientific estimates. Fishers' size-based estimates of catch composition indicate greater potential for overfishing than estimates based on SEK. Fishers' estimates of size at capture relative to size at maturity suggest routine inclusion of juveniles in the catch (9 of 15 species), and fishers' estimates suggest that harvested fish are substantially smaller than maximum body size for most species (11 of 15 species). Scientific estimates also suggest that harvested fish are generally smaller than maximum body size (13 of 15), but suggest that the catch is dominated by adults for most species (9 of 15 species), and that juveniles are present in the catch for fewer species (6 of 15). Most Samaná fishers characterized the current state of their fishery as poor (73%) and as having changed for the worse over the past 20 yr (60%). Fishers stated that concern about overfishing, catching small fish, and catching immature fish contributed to these perceptions, indicating a possible influence of catch-size composition on their perceptions. Future work should test this link more explicitly because we found no evidence that the minority of fishers with more positive perceptions of their fishery reported systematically different estimates of catch-size composition than those with the more …
Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.
Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N
2014-01-01
Tiger sharks (Galeocerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 and 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL, but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.
Directory of Open Access Journals (Sweden)
Yue Bin
Tree size distributions have long been of interest to ecologists and foresters because they reflect fundamental demographic processes. Previous studies have assumed that size distributions are often associated with population trends or with the degree of shade tolerance. We tested these associations for 31 tree species in a 20 ha plot in the Dinghushan south subtropical forest in China. These species varied widely in growth form and shade tolerance. We used 2005 and 2010 census data from that plot. We found that 23 species had reversed-J-shaped size distributions, and eight species had unimodal size distributions in 2005. On average, modal species had lower recruitment rates than reversed-J species, while showing no significant difference in mortality rates, per capita population growth rates or shade tolerance. We compared the observed size distributions with the equilibrium distributions projected from observed size-dependent growth and mortality. We found that observed distributions generally had the same shape as predicted equilibrium distributions in both unimodal and reversed-J species, but there were statistically significant, important quantitative differences between observed and projected equilibrium size distributions in most species, suggesting that these populations are not at equilibrium and that this forest is changing over time. Almost all modal species had U-shaped size-dependent mortality and/or growth functions, with turning points of both mortality and growth at intermediate size classes close to the peak in the size distribution. These results show that modal size distributions do not necessarily indicate either population decline or shade intolerance. Instead, the modal species in our study were characterized by a life history strategy of relatively strong conservatism in an intermediate size class, leading to very low growth and mortality in that size class, and thus to a peak in the size distribution at intermediate sizes.
Class Size: Can School Districts Capitalize on the Benefits of Smaller Classes?
Hertling, Elizabeth; Leonard, Courtney; Lumsden, Linda; Smith, Stuart C.
2000-01-01
This report is intended to help policymakers understand the benefits of class-size reduction (CSR). It assesses the costs of CSR, considers some research-based alternatives, and explores strategies that will help educators realize the benefits of CSR when it is implemented. It examines how CSR enhances student achievement, such as when the…
A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation
International Nuclear Information System (INIS)
Li, Haisen S.; Romeijn, H. Edwin; Dempsey, James F.
2006-01-01
We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the phase of dose sampling and superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution could be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by the discrete dose sampling under a 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a near mono-energetic (of width about 1% of the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in a water medium. By monodirectional, we mean that the proton particles travel in the same direction before entering the water medium, and the various scattering prior to entrance into water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity is either an infinitesimal or a finite sized beamlet. Since a finite sized beamlet is the superposition of infinitesimal pencil beams, the result of the maximum acceptable grid size obtained with an infinitesimal pencil beam also applies to a finite sized beamlet. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth dependent Gaussian distribution. The model included the spreads of the Bragg peak and the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the …
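The Shannon-Nyquist argument can be illustrated numerically: sample a Gaussian lateral profile on grids of different spacing and reconstruct the continuous profile by Whittaker-Shannon sinc interpolation. A sufficiently fine grid meets the 2% reconstruction criterion, a coarse one does not. The specific spacings below are illustrative, not the paper's derived limits.

```python
import math

def gaussian(x, sigma=1.0):
    """Unit-height Gaussian profile, standing in for the lateral dose profile."""
    return math.exp(-x * x / (2.0 * sigma * sigma))

def sinc_reconstruct(xq, delta, samples, x0):
    """Whittaker-Shannon reconstruction at xq from samples on a grid of spacing delta."""
    total = 0.0
    for k, s in enumerate(samples):
        u = (xq - (x0 + k * delta)) / delta
        total += s * (1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u))
    return total

def max_reconstruction_error(delta, sigma=1.0, half_width=8.0):
    """Worst reconstruction error over the profile core for grid spacing delta."""
    n = int(2 * half_width / delta) + 1
    x0 = -half_width
    samples = [gaussian(x0 + k * delta, sigma) for k in range(n)]
    worst, xq = 0.0, -3.0
    while xq <= 3.0:
        worst = max(worst, abs(sinc_reconstruct(xq, delta, samples, x0) - gaussian(xq, sigma)))
        xq += 0.05
    return worst
```

With sigma = 1, a spacing of half a sigma reconstructs the profile far inside the 2% limit, while a spacing of two sigma aliases badly and exceeds it.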
Maximum size-density relationships for mixed-hardwood forest stands in New England
Dale S. Solomon; Lianjun Zhang
2000-01-01
Maximum size-density relationships were investigated for two mixed-hardwood ecological types (sugar maple-ash and beech-red maple) in New England. Plots meeting type criteria and undergoing self-thinning were selected for each habitat. Using reduced major axis regression, no differences were found between the two ecological types. Pure species plots (the species basal...
Determining the effect of grain size and maximum induction upon coercive field of electrical steels
Landgraf, Fernando José Gomes; da Silveira, João Ricardo Filipini; Rodrigues-Jr., Daniel
2011-10-01
Although theoretical models have already been proposed, experimental data is still lacking to quantify the influence of grain size upon coercivity of electrical steels. Some authors consider a linear inverse proportionality, while others suggest a square root inverse proportionality. Results also differ with regard to the slope of the reciprocal of grain size-coercive field relation for a given material. This paper discusses two aspects of the problem: the maximum induction used for determining coercive force and the possible effect of lurking variables such as the grain size distribution breadth and crystallographic texture. Electrical steel sheets containing 0.7% Si, 0.3% Al and 24 ppm C were cold-rolled and annealed in order to produce different grain sizes (ranging from 20 to 150 μm). Coercive field was measured along the rolling direction and found to depend linearly on reciprocal of grain size with a slope of approximately 0.9 (A/m)mm at 1.0 T induction. A general relation for coercive field as a function of grain size and maximum induction was established, yielding an average absolute error below 4%. Through measurement of B50 and image analysis of micrographs, the effects of crystallographic texture and grain size distribution breadth were qualitatively discussed.
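The linear dependence on reciprocal grain size reported above is straightforward to recover from data by ordinary least squares on 1/g. A sketch with synthetic, noise-free points built from the abstract's 0.9 (A/m)·mm slope; the 30 A/m intercept is an assumed placeholder, not a value from the paper.

```python
def fit_hc_vs_inv_grain(grain_mm, hc):
    """Least-squares fit of Hc = c0 + c1 * (1/g), with g the grain size in mm."""
    xs = [1.0 / g for g in grain_mm]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(hc) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, hc))
    slope = sxy / sxx                      # c1, in (A/m)*mm
    intercept = mean_y - slope * mean_x    # c0, in A/m
    return intercept, slope

# Grain sizes spanning the paper's 20-150 um range, expressed in mm.
grains = [0.020, 0.040, 0.075, 0.110, 0.150]
hc_data = [30.0 + 0.9 / g for g in grains]   # synthetic data, intercept assumed
c0, c1 = fit_hc_vs_inv_grain(grains, hc_data)
```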
National Aeronautics and Space Administration — Probability Calibration by the Minimum and Maximum Probability Scores in One-Class Bayes Learning for Anomaly Detection. Guichong Li, Nathalie Japkowicz, Ian Hoffman, ...
Directory of Open Access Journals (Sweden)
Adam Hartstone-Rose
2011-01-01
In a recent study, we quantified the scaling of ingested food size (Vb), the maximum size at which an animal consistently ingests food whole, and found that Vb scaled isometrically between species of captive strepsirrhines. The current study examines the relationship between Vb and body size within species, with a focus on the frugivorous Varecia rubra and the folivorous Propithecus coquereli. We found no overlap in Vb between the species (all V. rubra ingested larger pieces of food relative to those eaten by P. coquereli), and least-squares regression of Vb and three different measures of body mass showed no scaling relationship within each species. We believe that this lack of relationship results from the relatively narrow intraspecific body size variation and seemingly patternless individual variation in Vb within species, and take this study as further evidence that general scaling questions are best examined interspecifically rather than intraspecifically.
Mechanical limits to maximum weapon size in a giant rhinoceros beetle.
McCullough, Erin L
2014-07-07
The horns of giant rhinoceros beetles are a classic example of the elaborate morphologies that can result from sexual selection. Theory predicts that sexual traits will evolve to be increasingly exaggerated until survival costs balance the reproductive benefits of further trait elaboration. In Trypoxylus dichotomus, long horns confer a competitive advantage to males, yet previous studies have found that they do not incur survival costs. It is therefore unlikely that horn size is limited by the theoretical cost-benefit equilibrium. However, males sometimes fight vigorously enough to break their horns, so mechanical limits may set an upper bound on horn size. Here, I tested this mechanical limit hypothesis by measuring safety factors across the full range of horn sizes. Safety factors were calculated as the ratio between the force required to break a horn and the maximum force exerted on a horn during a typical fight. I found that safety factors decrease with increasing horn length, indicating that the risk of breakage is indeed highest for the longest horns. Structural failure of oversized horns may therefore oppose the continued exaggeration of horn length driven by male-male competition and set a mechanical limit on the maximum size of rhinoceros beetle horns. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Relationship between Exposure to Class Size Reduction and Student Achievement in California
Directory of Open Access Journals (Sweden)
Brian M. Stecher
2003-11-01
Full Text Available The CSR Research Consortium has been evaluating the implementation of the Class Size Reduction (CSR) initiative in California since 1998. Initial reports documented the implementation of the program and its impact on the teacher workforce, the teaching of mathematics and Language Arts, parental involvement, and student achievement. This study examines the relationship between student achievement and the number of years students have been exposed to CSR in grades K-3. The analysis was conducted at the grade level within schools using student achievement data collected in 1998-2001. Archival data collected by the state were used to establish CSR participation by grade for each school in the state. Most students had one of two patterns of exposure to CSR, which differed by only one year across grades K-3. The analysis found no strong association between achievement and exposure to CSR for these groups, after controlling for pre-existing differences between the groups.
Comparison of Machine Learning Techniques in Inferring Phytoplankton Size Classes
Directory of Open Access Journals (Sweden)
Shuibo Hu
2018-03-01
Full Text Available The size of phytoplankton not only influences its physiology, metabolic rates and the marine food web, but also serves as an indicator of phytoplankton functional roles in ecological and biogeochemical processes. Therefore, algorithms have been developed to infer the synoptic distribution of phytoplankton cell size, denoted as phytoplankton size classes (PSCs), in surface ocean waters by means of remotely sensed variables. This study, using the NASA bio-Optical Marine Algorithm Data set (NOMAD) high performance liquid chromatography (HPLC) database and satellite match-ups, aimed to compare the effectiveness of modeling techniques, including partial least squares (PLS), artificial neural networks (ANN), support vector machine (SVM) and random forests (RF), and feature selection techniques, including genetic algorithm (GA), successive projection algorithm (SPA) and recursive feature elimination based on support vector machine (SVM-RFE), for inferring PSCs from remote sensing data. Results showed that: (1) SVM-RFE worked better in selecting sensitive features; (2) RF performed better than PLS, ANN and SVM in calibrating PSC retrieval models; (3) machine learning techniques produced better performance than the chlorophyll-a based three-component method; (4) sea surface temperature, wind stress, and spectral curvature derived from the remote sensing reflectance at 490, 510, and 555 nm were among the features most sensitive to PSCs; and (5) the combination of SVM-RFE feature selection and random forests regression is recommended for inferring PSCs. This study demonstrates the effectiveness of machine learning techniques in selecting sensitive features and calibrating models for PSC estimation with remote sensing.
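The recommended pipeline (SVM-RFE feature selection followed by random-forest regression) can be sketched with scikit-learn. Synthetic data stands in for the NOMAD/satellite match-up predictors; sample sizes, feature counts and hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Synthetic stand-in for remote-sensing predictors (SST, wind stress,
# reflectance-derived features) and a PSC-like target variable.
X, y = make_regression(n_samples=300, n_features=10, n_informative=4,
                       noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: SVM-RFE, i.e. recursive feature elimination driven by the
# coefficients of a linear support vector regressor.
selector = RFE(SVR(kernel="linear"), n_features_to_select=4).fit(X_tr, y_tr)

# Step 2: random-forest regression on the selected features only.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_tr[:, selector.support_], y_tr)
score = rf.score(X_te[:, selector.support_], y_te)
print(f"R^2 on held-out data: {score:.3f}")
```

The same two-stage structure applies to the real data set; only the inputs and the number of retained features would change.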
Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz
2014-07-01
Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if, in addition, a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of the overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
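The "worst case" search described in this abstract can be sketched numerically for a simple two-arm case: for each interim outcome, choose the second-stage sample size that maximises the conditional type 1 error of the naive fixed-sample test, then average over the interim distribution under the null. The first-stage size and the grid of allowed second-stage sizes below are assumptions for illustration, not the paper's setup:

```python
import numpy as np
from scipy.stats import norm

# Two-stage z-test sketch: after observing the interim z-statistic z1
# (first-stage size n1), an adversary picks n2 from a grid to maximise
# the conditional probability that the NAIVE pooled two-sided z-test
# (critical value for alpha = 0.05, as if n were fixed) rejects H0.
n1 = 50
z_crit = norm.ppf(0.975)
n2_grid = np.array([10, 25, 50, 100, 200, 400, 800])

z1 = np.linspace(-6, 6, 2001)        # interim statistic under H0
s1 = np.sqrt(n1) * z1                # stage-1 sum of observations

# Conditional type 1 error of the naive test for each (z1, n2) pair.
ce = np.empty((len(z1), len(n2_grid)))
for j, n2 in enumerate(n2_grid):
    hi = (z_crit * np.sqrt(n1 + n2) - s1) / np.sqrt(n2)
    lo = (-z_crit * np.sqrt(n1 + n2) - s1) / np.sqrt(n2)
    ce[:, j] = norm.sf(hi) + norm.cdf(lo)

# Worst-case rule: maximise the conditional error, then integrate
# against the N(0, 1) density of the interim statistic.
worst_ce = ce.max(axis=1)
alpha_max = np.trapz(norm.pdf(z1) * worst_ce, z1)
print(f"maximum type 1 error under worst-case adaptation: {alpha_max:.3f}")
```

With these settings the true size of the nominal 0.05-level test comes out well above 0.05, illustrating the inflation; tightening the n2 grid shrinks it back toward the nominal level, in line with the abstract's finding about constrained second-stage sizes.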
Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong
2016-01-01
Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.
2010-10-01
Title 50 (Wildlife and Fisheries), Section 697.21: Gear identification and marking, escape vent, maximum trap size, and ghost panel requirements. (a) Gear identification... Administrator finds to be consistent with paragraph (c) of this section. (d) Ghost panel. (1) Lobster traps not...
Directory of Open Access Journals (Sweden)
Alex Robinson
2018-01-01
Full Text Available Over the past decade, a number of methods have been developed to estimate size-class primary production from either in situ phytoplankton pigment data or remotely-sensed data. In this context, the first objective of this study was to compare two methods of estimating size-class specific (micro-, nano-, and pico-phytoplankton) photosynthesis-irradiance (PE) parameters from pigment data. The second objective was to analyse the relationship between environmental variables (temperature, nitrate and PAR) and PE parameters in the different size classes. A large dataset was used of simultaneous measurements of the PE parameters (n = 1,260) and phytoplankton pigment markers (n = 2,326), from 3 different institutes. There were no significant differences in mean PE parameters of the different size classes between the chemotaxonomic method of Uitz et al. (2008) and the pigment markers and carbon-to-Chl a ratios method of Sathyendranath et al. (2009). For both methods, mean maximum photosynthetic rates (PmB) for micro-phytoplankton were significantly lower than those for pico-phytoplankton and nano-phytoplankton. The mean light-limited slope (αB) for nano-phytoplankton was significantly higher than for the other size taxa. For micro-phytoplankton dominated samples identified using the Sathyendranath et al. (2009) method, both PmB and αB exhibited a significant, positive linear relationship with temperature, whereas for pico-phytoplankton the correlation with temperature was negative. Nano-phytoplankton dominated samples showed a positive correlation between PmB and temperature, whereas for αB and the light saturation parameter (Ek) the correlations were not significant. For the Uitz et al. (2008) method, only micro-phytoplankton PmB, pico-phytoplankton αB, and nano- and pico-phytoplankton Ek exhibited significant relationships with temperature. The temperature ranges occupied by the size classes derived using these methods differed. The Uitz et al. (2008) method
DEFF Research Database (Denmark)
Mikosch, Thomas Valentin; Moser, Martin
2013-01-01
We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting...... on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable....
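The flavour of this result can be checked by simulation: for a walk with regularly varying jumps, the maximum increment is dominated by the largest single jump. The jump law (Student's t with 1.5 degrees of freedom, hence tail index about 1.5) and the sizes below are illustrative choices, not from the paper:

```python
import numpy as np

# Simulate random walks with heavy-tailed jumps and compare the maximum
# increment max_{i<j}(S_j - S_i) with the largest single jump.
rng = np.random.default_rng(42)
n_steps, n_reps = 2000, 200

ratios = np.empty(n_reps)
for r in range(n_reps):
    jumps = rng.standard_t(df=1.5, size=n_steps)   # regularly varying tail
    s = np.cumsum(jumps)
    # max increment = max_j (S_j - min_{i<j} S_i), computed in O(n):
    # prefix minima over S_0 = 0, S_1, ..., S_{j-1}.
    prefix_min = np.minimum.accumulate(np.concatenate(([0.0], s[:-1])))
    max_increment = np.max(s - prefix_min)
    ratios[r] = max_increment / jumps.max()

print(f"median(max increment / largest jump) = {np.median(ratios):.2f}")
```

Each single jump is itself an increment, so the ratio is at least 1 by construction; for heavy-tailed jumps it stays of order 1, reflecting the single-big-jump behaviour behind the Fréchet limit.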
Class Size and Student Diversity: Two Sides of the Same Coin. Teacher Voice
Froese-Germain, Bernie; Riel, Rick; McGahey, Bob
2012-01-01
Among Canadian teacher unions, discussions of class size are increasingly being informed by the importance of considering the diversity of student needs within the classroom (often referred to as class composition). For teachers, both class size and diversity matter. Teachers consistently adapt their teaching to address the individual needs of the…
Class Size Reduction in California: Summary of the 1998-99 Evaluation Findings.
Stecher, Brian M.; Bohrnstedt, George W.
This report discusses the results of the third year--1998-99--of California's Class Size Reduction (CSR) program. Assessments of the program show that CSR was almost fully implemented by 1998-99, with over 92 percent of students in K-3 in classes of 20 or fewer students. Those K-3 classes that had not been reduced in size were concentrated in…
Maximum leaf conductance driven by CO2 effects on stomatal size and density over geologic time.
Franks, Peter J; Beerling, David J
2009-06-23
Stomatal pores are microscopic structures on the epidermis of leaves formed by two specialized guard cells that control the exchange of water vapor and CO2 between plants and the atmosphere. Stomatal size (S) and density (D) determine the maximum leaf diffusive (stomatal) conductance of CO2, gc(max), to sites of assimilation. Although large variations in D observed in the fossil record have been correlated with atmospheric CO2, the crucial significance of similarly large variations in S has been overlooked. Here, we use physical diffusion theory to explain why large changes in S necessarily accompanied the changes in D and atmospheric CO2 over the last 400 million years. In particular, we show that high densities of small stomata are the only way to attain the highest gc(max) values required to counter CO2 "starvation" at low atmospheric CO2 concentrations. This explains cycles of increasing D and decreasing S evident in the fossil history of stomata under the CO2-impoverished atmospheres of the Permo-Carboniferous and Cenozoic glaciations. The pattern was reversed under rising atmospheric CO2 regimes. Selection for small S was crucial for attaining high gc(max) under falling atmospheric CO2 and, therefore, may represent a mechanism linking CO2 and the increasing gas-exchange capacity of land plants over geologic time.
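The diffusion-theory argument can be illustrated with the standard end-corrected pore conductance formula used in this literature, g_max = (d/v) · D · a_max / (l + (π/2)·√(a_max/π)); the parameter values below are assumed order-of-magnitude inputs, not data from the paper:

```python
import math

# Illustrative inputs (assumed, typical orders of magnitude):
d = 2.49e-5    # diffusivity of water vapour in air, m^2 s^-1
v = 0.0224     # molar volume of air, m^3 mol^-1
l = 1.0e-5     # stomatal pore depth, m (held fixed for simplicity)

def g_max(D, a_max):
    """Maximum diffusive conductance (mol m^-2 s^-1).
    D: stomatal density (m^-2); a_max: maximum pore area (m^2)."""
    end_correction = (math.pi / 2) * math.sqrt(a_max / math.pi)
    return (d / v) * D * a_max / (l + end_correction)

# Few large stomata vs. many small stomata with the SAME total pore
# area per unit leaf area (D * a_max held constant):
g_large = g_max(D=2.0e8, a_max=1.0e-10)   # 200 mm^-2, 100 um^2 pores
g_small = g_max(D=4.0e8, a_max=0.5e-10)   # 400 mm^-2,  50 um^2 pores
print(f"few large: {g_large:.2f}, many small: {g_small:.2f} mol m^-2 s^-1")
```

Because the end-correction term grows with √a_max, smaller pores waste less of the diffusion path, so the many-small configuration yields the higher conductance at equal total pore area, which is the abstract's argument for attaining high gc(max) through high D and small S.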
Cooperative learning in industrial-sized biology classes.
Armstrong, Norris; Chang, Shu-Mei; Brickman, Marguerite
2007-01-01
This study examined the impact of cooperative learning activities on student achievement and attitudes in large-enrollment (>250) introductory biology classes. We found that students taught using a cooperative learning approach showed greater improvement in their knowledge of course material compared with students taught using a traditional lecture format. In addition, students viewed cooperative learning activities highly favorably. These findings suggest that encouraging students to work in small groups and improving feedback between the instructor and the students can help to improve student outcomes even in very large classes. These results should be viewed cautiously, however, until this experiment can be replicated with additional faculty. Strategies for potentially improving the impact of cooperative learning on student achievement in large courses are discussed.
Effects of Class Size and Attendance Policy on University Classroom Interaction in Taiwan
Bai, Yin; Chang, Te-Sheng
2016-01-01
Classroom interaction experience is one of the main parts of students' learning lives. However, surprisingly little research has investigated students' perceptions of classroom interaction with different attendance policies across different class sizes in the higher education system. To elucidate the effects of class size and attendance policy on…
What We Have Learned about Class Size Reduction in California. Capstone Report.
Bohrnstedt, George W., Ed.; Stecher, Brian M., Ed.
This final report on the California Class Size Reduction (CSR) initiative summarizes findings from three earlier reports dating back to 1997. Chapter 1 recaps the history of California's CSR initiative and includes a discussion of what state leaders' expectations were when CSR was passed. The chapter also describes research on class-size reduction…
The Class Size Question: A Study at Different Levels of Analysis. ACER Research Monograph No. 26.
Larkin, Anthony I.; Keeves, John P.
The purpose of this investigation was to examine the ways in which class size affected other facets of the educational environment of the classroom. The study focused on the commonly found positive relationship between class size and achievement. The most plausible explanation of the evidence seems to involve the effects of grouping more able…
Class Size and Sorting in Market Equilibrium: Theory and Evidence. NBER Working Paper No. 13303
Urquiola, Miguel; Verhoogen, Eric
2007-01-01
This paper examines how schools choose class size and how households sort in response to those choices. Focusing on the highly liberalized Chilean education market, we develop a model in which schools are heterogeneous in an underlying productivity parameter, class size is a component of school quality, households are heterogeneous in income and…
The Cost of Class Size Reduction: Advice for Policymakers. RAND Graduate School Dissertation.
Reichardt, Robert E.
This dissertation provides information to state-level policymakers that will help them avoid two implementation problems seen in the past in California's class-size-reduction (CSR) reform. The first problem was that flat, per student reimbursement did not adequately cover costs in districts with larger pre-CSR class-sizes or smaller schools. The…
Does Class-Size Reduction Close the Achievement Gap? Evidence from TIMSS 2011
Li, Wei; Konstantopoulos, Spyros
2017-01-01
Policies about reducing class size have been implemented in the US and Europe in the past decades. Only a few studies have discussed the effects of class size at different levels of student achievement, and their findings have been mixed. We employ quantile regression analysis, coupled with instrumental variables, to examine the causal effects of…
Dodrill, Michael J.; Yackulic, Charles B.; Kennedy, Theodore A.; Haye, John W
2016-01-01
The cold and clear water conditions present below many large dams create ideal conditions for the development of economically important salmonid fisheries. Many of these tailwater fisheries have experienced declines in the abundance and condition of large trout species, yet the causes of these declines remain uncertain. Here, we develop, assess, and apply a drift-foraging bioenergetics model to identify the factors limiting rainbow trout (Oncorhynchus mykiss) growth in a large tailwater. We explored the relative importance of temperature, prey quantity, and prey size by constructing scenarios where these variables, both singly and in combination, were altered. Predicted growth matched empirical mass-at-age estimates, particularly for younger ages, demonstrating that the model accurately describes how current temperature and prey conditions interact to determine rainbow trout growth. Modeling scenarios that artificially inflated prey size and abundance demonstrate that rainbow trout growth is limited by the scarcity of large prey items and overall prey availability. For example, shifting 10% of the prey biomass to the 13 mm (large) length class, without increasing overall prey biomass, increased lifetime maximum mass of rainbow trout by 88%. Additionally, warmer temperatures resulted in lower predicted growth at current and lower levels of prey availability; however, growth was similar across all temperatures at higher levels of prey availability. Climate change will likely alter flow and temperature regimes in large rivers with corresponding changes to invertebrate prey resources used by fish. Broader application of drift-foraging bioenergetics models to build a mechanistic understanding of how changes to habitat conditions and prey resources affect growth of salmonids will benefit management of tailwater fisheries.
An Approximate Proximal Bundle Method to Minimize a Class of Maximum Eigenvalue Functions
Directory of Open Access Journals (Sweden)
Wei Wang
2014-01-01
Full Text Available We present an approximate nonsmooth algorithm to solve a minimization problem in which the objective function is the sum of a maximum eigenvalue function of matrices and a convex function. The essential idea for solving the optimization problem is similar to that of the proximal bundle method, with the difference that we use approximate subgradients and function values to construct an approximate cutting-plane model of the objective. An important advantage of the approximate cutting-plane model is that it is more stable than the exact cutting-plane model. In addition, an approximate proximal bundle algorithm is given. Furthermore, the sequences generated by the algorithm converge to the optimal solution of the original problem.
DEFF Research Database (Denmark)
Mikosch, Thomas Valentin; Rackauskas, Alfredas
2010-01-01
In this paper, we deal with the asymptotic distribution of the maximum increment of a random walk with a regularly varying jump size distribution. This problem is motivated by a long-standing problem on change point detection for epidemic alternatives. It turns out that the limit distribution...... of the maximum increment of the random walk is one of the classical extreme value distributions, the Fréchet distribution. We prove the results in the general framework of point processes and for jump sizes taking values in a separable Banach space...
Cho, Hyunkuk; Glewwe, Paul; Whitler, Melissa
2012-01-01
Many U.S. states and cities spend substantial funds to reduce class size, especially in elementary (primary) school. Estimating the impact of class size on learning is complicated, since children in small and large classes differ in many observed and unobserved ways. This paper uses a method of Hoxby (2000) to assess the impact of class size on…
Kewaza, Samuel; Welch, Myrtle I.
2013-01-01
Research on reading has established that reading is a pivotal discipline and early literacy development dictates later reading success. Therefore, the purpose of this study is to investigate challenges encountered with reading pedagogy, teaching materials, and teachers' attitudes towards teaching reading in crowded primary classes in Kampala,…
Directory of Open Access Journals (Sweden)
Kai Yan
2015-01-01
Full Text Available A predictive model for the droplet size and velocity distributions of a pressure swirl atomizer has been proposed based on the maximum entropy formalism (MEF). The constraint conditions of the MEF model include the conservation laws of mass, momentum, and energy. The effects of liquid swirling strength, Weber number, gas-to-liquid axial velocity ratio and gas-to-liquid density ratio on the droplet size and velocity distributions of a pressure swirl atomizer are investigated. Results show that the model based on the maximum entropy formalism works well in predicting droplet size and velocity distributions under different spray conditions. Liquid swirling strength, Weber number, gas-to-liquid axial velocity ratio and gas-to-liquid density ratio have different effects on the droplet size and velocity distributions of a pressure swirl atomizer.
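A stripped-down version of the maximum entropy formalism is easy to sketch: with a single mass constraint, the max-ent size distribution takes the exponential form p_i ∝ exp(-λ d_i³), and λ is fixed by the constraint. The size grid and target below are invented, and the full model would add momentum and energy constraints (one Lagrange multiplier each):

```python
import numpy as np
from scipy.optimize import brentq

# Discrete droplet size classes and a prescribed mean of d^3 (a proxy
# for the mass-conservation constraint). Both are assumptions.
d = np.linspace(0.2, 3.0, 60)     # droplet diameters, arbitrary units
m_target = 1.5                    # required value of sum_i p_i d_i^3

def mean_d3(lam):
    w = np.exp(-lam * d**3)       # max-ent form: p_i ∝ exp(-λ d_i³)
    p = w / w.sum()
    return (p * d**3).sum()

# Solve for the Lagrange multiplier that satisfies the constraint.
lam = brentq(lambda x: mean_d3(x) - m_target, 0.0, 10.0)
p = np.exp(-lam * d**3)
p /= p.sum()
print(f"lambda = {lam:.3f}, constrained mean of d^3 = {(p * d**3).sum():.3f}")
```

The resulting p is the least-biased size distribution consistent with the imposed constraint, which is exactly the rationale of the MEF model in the abstract.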
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains standard external morphometrics and internal oral cavity morphometrics from wild and captive reared loggerhead sea turtles in size classes...
Impact of the size of the class on pupils’ psychosocial well-being
DEFF Research Database (Denmark)
Nielsen, Bo Birk
Most research on class size effects focuses on pupils' school achievement, but little on pupils' psychosocial well-being. On the other hand, an increasing number of studies have shown that there is a link between pupils' psychosocial well-being and their school achievement. 97 Danish typically...... developing 3rd grade pupils were tested. They were divided into 3 class size groups: Small (10 pupils), Medium (20 pupils), and Large (25 pupils). The average age (10 years), the proportion of boys and girls (50%), and SES (medium) were similar in the 3 class size groups. Pupils' psychosocial well-being...... and there was a significant link between lack of understanding of mixed emotions and lower levels of self-concept and higher levels of anger. The results of this research may contribute to a better understanding of the impact of class size on pupils' school achievement via the identification of risk factors...
Zhu, Tingbing; Zhang, Lihong; Zhang, Tanglin; Wang, Yaping; Hu, Wei; Olsen, Rolf Eric; Zhu, Zuoyan
2017-10-01
The present study preliminarily examined the differences in maximum handling size, prey size and species selectivity of growth hormone transgenic and non-transgenic common carp Cyprinus carpio when foraging on four gastropod species (Bellamya aeruginosa, Radix auricularia, Parafossarulus sinensis and Alocinma longicornis) under laboratory conditions. In the maximum handling size trial, five fish from each age group (1-year-old and 2-year-old) and each genotype (transgenic and non-transgenic) of common carp were individually allowed to feed on B. aeruginosa with a wide shell height range. The results showed that maximum handling size increased linearly with fish length, and there was no significant difference in maximum handling size between the two genotypes. In the size selection trial, three pairs of 2-year-old transgenic and non-transgenic carp were individually allowed to feed on three size groups of B. aeruginosa. The results showed that both genotypes of C. carpio favored the small-sized group over the large-sized group. In the species selection trial, three pairs of 2-year-old transgenic and non-transgenic carp were individually allowed to feed on thick-shelled B. aeruginosa and thin-shelled R. auricularia, and five pairs of 2-year-old transgenic and non-transgenic carp were individually allowed to feed on two gastropod species (P. sinensis and A. longicornis) with similar size and shell strength. The results showed that both genotypes preferred thin-shelled R. auricularia over thick-shelled B. aeruginosa, but there was no significant difference in selectivity between the two genotypes when fed P. sinensis and A. longicornis. The present study indicates that transgenic and non-transgenic C. carpio show similar selectivity of predation on the size- and species-limited gastropods. While this information may be useful for assessing the environmental risk of transgenic carp, it does not necessarily demonstrate that transgenic common carp might
Morris, David E., Sr.; Scott, John
2014-01-01
The purpose of this pilot study is to examine the effects of the timing of classes and class size on student performance in introductory accounting courses. Factors affecting student success are important to all stakeholders in the academic community. Previous studies have shown mixed results regarding the effects of class size on student success…
Study of the variation of maximum beam size with quadrupole gradient in the FMIT drift tube linac
International Nuclear Information System (INIS)
Boicourt, G.P.; Jameson, R.A.
1981-01-01
The sensitivity of maximum beam size to input mismatch is studied as a function of quadrupole gradient in a short, high-current, drift-tube linac (DTL) for two prescriptions: constant phase advance with constant filling factor; and constant strength with constant-length quads. Numerical study using PARMILA shows that the choice of quadrupole strength that minimizes the maximum transverse size of the matched beam through subsequent cells of the linac tends to be most sensitive to input mismatch. However, gradients exist nearby that result in almost-as-small beams over a suitably broad range of mismatch. The study was used to choose the initial gradient for the DTL portion of the Fusion Material Irradiation Test (FMIT) linac. The matching required across quad groups is also discussed.
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Sen, Sedat
2018-01-01
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
Directory of Open Access Journals (Sweden)
SANDA ROȘCA
2014-06-01
Full Text Available Application of Soil Loss Scenarios Using the ROMSEM Model Depending on Maximum Land Use Pretability Classes. A Case Study. Practicing a modern agriculture that takes into consideration the favourability conditions and the natural resources of a territory represents one of the main national objectives. Due to the importance of the agricultural land, which prevails among the land use types in the Niraj river basin, as well as the pedological and geomorphological characteristics, different areas where soil erosion is above the accepted thresholds were identified by applying the ROMSEM model. To do so, a GIS database was used, regrouping quantitative information regarding soil type, land use, climate and hydrogeology, used as indicators in the model. Estimations of the potential soil erosion have been made for the entire basin as well as for its subbasins. The essential role played by the morphometrical characteristics (concavity, convexity, slope length, etc.) has also been highlighted. Taking into account the strongly agricultural character of the analysed territory, the scoring method was employed to identify crop favourability for wheat, barley, corn, sunflower, sugar beet, potato, soy and pea-bean. The results were used as input data for the C coefficient (crop/vegetation and management factor) in the ROMSEM model, which was applied for the present land use conditions as well as for four scenarios depicting the land use types with maximum favourability. The theoretical, modelled values of soil erosion were obtained as a function of land use, while the other variables of the model were kept constant.
An Examination of the Relationship between Online Class Size and Instructor Performance
Directory of Open Access Journals (Sweden)
Chris Sorensen
2015-01-01
Full Text Available With no physical walls, the online classroom has the potential to house a large number of students. A concern for some is what happens to the quality of instruction in courses with high enrollments. The purpose of this research was to examine online class size and its relationship to, and potential influence on, an instructor's performance. Results were mixed, indicating that class size had a positive relationship with some of the variables meant to measure online instructor performance and a negative relationship with others. Online class size was seen as having the most concerning relationship with, and potential influence on, an instructor's ability to provide quality feedback to students and to apply his/her expertise consistently and effectively.
Directory of Open Access Journals (Sweden)
Yu Tian
2017-07-01
Full Text Available This study examined the relationship between externalizing behavior and academic engagement, and tested the possibility of class collective efficacy and class size moderating this relationship. Data were collected from 28 Chinese classrooms (N = 1034 students; grades 7, 8, and 9) with student reports. Hierarchical linear modeling was used to test all hypotheses, and results revealed a negative relationship between externalizing behavior and academic engagement; class collective efficacy was also significantly related to academic engagement. Additionally, class collective efficacy and class size moderated the relationship between externalizing behavior and academic engagement: for students in a class with high collective efficacy or small size (≤30 students), the relationship between externalizing behavior and academic engagement was weaker than for those in a class with low collective efficacy or large size (≥43 students). Results are discussed considering self-regulatory mechanisms and social environment theory, with possible implications for teachers of students' learning provided.
Lusiana, Evellin Dewi
2017-12-01
The parameters of a binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition in which one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One way to resolve the separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurrence in the binary probit regression model under the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression model estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined using simulation under different sample sizes. The results showed that the chance of separation occurrence with the MLE method for small sample sizes is higher than with Firth's approach. On the other hand, for larger sample sizes, the probability decreased and was nearly identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSEs than the MLE estimators, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimators.
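The separation problem this abstract describes can be shown in a few lines: with completely separated binary data, the probit log-likelihood keeps increasing as the slope grows, so no finite MLE exists (Firth's penalised likelihood, not implemented here, avoids this by adding a Jeffreys-prior penalty). The toy data are invented:

```python
import numpy as np
from scipy.stats import norm

# Complete separation: y = 1 exactly when x > 0.
x = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0, 0, 1, 1])

def probit_loglik(beta):
    """Probit log-likelihood for a slope-only model, via logcdf for
    numerical stability: log Phi(beta*x) if y=1, log Phi(-beta*x) if y=0."""
    z = beta * x
    return np.where(y == 1, norm.logcdf(z), norm.logcdf(-z)).sum()

# The log-likelihood rises monotonically toward 0 as beta grows, so a
# Newton-type MLE routine diverges instead of converging.
lls = [probit_loglik(b) for b in (1.0, 5.0, 20.0)]
print([float(v) for v in lls])
```

Each larger slope strictly increases the likelihood, which is exactly why iterative MLE fitting fails to converge on separated data.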
A Plan for the Evaluation of California's Class Size Reduction Initiative.
Kirst, Michael; Bohrnstedt, George; Stecher, Brian
In July 1996, California began its Class Size Reduction (CSR) Initiative. To gauge the effectiveness of this initiative, an analysis of its objectives and an overview of proposed strategies for evaluating CSR are presented here. An outline of the major challenges that stand between CSR and its mission is provided. These include logistical…
Class Size Reduction or Rapid Formative Assessment?: A Comparison of Cost-Effectiveness
Yeh, Stuart S.
2009-01-01
The cost-effectiveness of class size reduction (CSR) was compared with the cost-effectiveness of rapid formative assessment, a promising alternative for raising student achievement. Drawing upon existing meta-analyses of the effects of student-teacher ratio, evaluations of CSR in Tennessee, California, and Wisconsin, and RAND cost estimates, CSR…
Digital Repository Service at National Institute of Oceanography (India)
Malik, A.; Scheibe, A.; LokaBharathi, P.A.; Gleixner, G.
size classes by coupling high-performance liquid chromatography (HPLC) - size exclusion chromatography (SEC) to online isotope ratio mass spectrometry (IRMS). This represents a significant methodological contribution to DOC research. The interface...
Rankin, Brian D; Fox, Jeremy W; Barrón-Ortiz, Christian R; Chew, Amy E; Holroyd, Patricia A; Ludtke, Joshua A; Yang, Xingkai; Theodor, Jessica M
2015-08-07
Species selection, covariation of species' traits with their net diversification rates, is an important component of macroevolution. Most studies have relied on indirect evidence for its operation and have not quantified its strength relative to other macroevolutionary forces. We use an extension of the Price equation to quantify the mechanisms of body size macroevolution in mammals from the latest Palaeocene and earliest Eocene of the Bighorn and Clarks Fork Basins of Wyoming. Dwarfing of mammalian taxa across the Palaeocene/Eocene Thermal Maximum (PETM), an intense, brief warming event that occurred at approximately 56 Ma, has been suggested to reflect anagenetic change and the immigration of small-bodied mammals, but might also be attributable to species selection. Using previously reconstructed ancestor-descendant relationships, we partitioned change in mean mammalian body size into three distinct mechanisms: species selection operating on resident mammals, anagenetic change within resident mammalian lineages and change due to immigrants. The remarkable decrease in mean body size across the warming event occurred through anagenetic change and immigration. Species selection also was strong across the PETM but, intriguingly, favoured larger-bodied species, implying some unknown mechanism(s) by which warming events affect macroevolution. © 2015 The Author(s).
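The Price-equation partition described above can be sketched numerically. The following toy example (made-up numbers, not the paper's data) splits the change in mean body size into a species-selection term, an anagenetic (transmission) term, and an immigration adjustment:

```python
# Resident lineages: ancestral log body mass z, net diversification "fitness" w
# (descendant species count), and anagenetic change dz within each lineage.
residents = [
    # (z_ancestor, w, dz)
    (2.0, 1, -0.3),
    (2.5, 2, -0.1),
    (3.0, 3,  0.0),   # larger-bodied lineage leaves more descendants
]

w_bar = sum(w for _, w, _ in residents) / len(residents)
z_bar = sum(z for z, _, _ in residents) / len(residents)

# Price equation: dz_bar = Cov(w, z)/w_bar + E(w * dz)/w_bar
cov_wz = sum((w - w_bar) * (z - z_bar) for z, w, _ in residents) / len(residents)
selection = cov_wz / w_bar
transmission = sum(w * dz for _, w, dz in residents) / len(residents) / w_bar

new_resident_mean = z_bar + selection + transmission

# Immigration shifts the mean further toward the immigrants' body sizes
immigrants = [1.2, 1.5]              # small-bodied immigrant species
n_res = sum(w for _, w, _ in residents)
total_mean = (new_resident_mean * n_res + sum(immigrants)) / (n_res + len(immigrants))
print(selection, transmission, new_resident_mean, total_mean)
```

Note the direction of the terms mirrors the abstract's finding: the selection term here is positive (larger-bodied lineages diversify more), while the transmission and immigration terms pull the mean body size down.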
Lin, Junfang; Cao, Wenxi; Wang, Guifen; Hu, Shuibo
2014-06-15
Ocean-color remote sensing has been used as a tool to detect phytoplankton size classes (PSCs). In this study, a three-component model of PSC was reparameterized using seven years of pigment measurements acquired in the South China Sea (SCS). The model was then used to infer PSC in a cyclonic eddy which was observed west of Luzon Island from SeaWiFS chlorophyll-a (chla) and sea-surface height anomaly (SSHA) products. Enhanced productivity and a shift in the PSC were observed, which were likely due to upwelling of nutrient-rich water into the euphotic zone. The supply of nutrients promoted the growth of larger cells (micro- and nanoplankton), and the PSC shifted to greater sizes. However, the picoplankton were still important and contributed ∼48% to total chla concentration. In addition, PSC time series revealed a lag period of about three weeks between maximum eddy intensity and maximum chlorophyll, which may have been related to phytoplankton growth rate and duration of eddy intensity. Copyright © 2014 Elsevier Ltd. All rights reserved.
Prodanov, Dimiter; Feirabend, Hans K P
2008-10-03
Morphological classification of nerve fibers could aid the assessment of neural regeneration and the understanding of the selectivity of nerve stimulation. Specific populations of myelinated nerve fibers can be investigated by retrograde tracing from a muscle followed by microscopic measurements of the labeled fibers at different anatomical levels. Gastrocnemius muscles of adult rats were injected with the retrograde tracer Fluoro-Gold. After a survival period of 3 days, cross-sections of spinal cords, ventral roots, and sciatic and tibial nerves were collected and imaged on a fluorescence microscope. Nerve fibers were classified using a variation-based criterion acting on the distribution of their equivalent diameters. The same criterion was used to classify the labeled axons using the size of the fluorescent marker. Measurements of the axons were paired to those of the entire fibers (axons + myelin sheaths) in order to establish the correspondence between the so-established axonal and fiber classifications. It was found that nerve fibers in L6 ventral roots could be classified into four populations comprising two classes of Aα (denoted Aα1 and Aα2), Aγ, and an additional class of Aγα fibers. Cut-off borders were estimated to be 5.00 ± 0.09 μm (SEM) between the Aγ and Aγα fiber classes, 6.86 ± 0.11 μm (SEM) between the Aγα and Aα1 classes, and 8.66 ± 0.16 μm (SEM) between the Aα1 and Aα2 classes. Topographical maps of the nerve fibers that innervate the gastrocnemius muscles were constructed per fiber class for the spinal root L6. The major advantage of the presented approach is the combined indirect classification of nerve fiber types and the construction of topographical maps of the so-identified fiber classes.
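The paper's variation-based criterion is not spelled out in the abstract; as a stand-in, a between-class-variance (Otsu-style) threshold on a bimodal diameter sample illustrates how cut-off borders of this kind can be derived (the data below are synthetic and the class means are hypothetical):

```python
import numpy as np

def otsu_threshold(values, n_bins=64):
    """Split a 1-D sample into two classes by maximizing the
    between-class variance (Otsu's criterion)."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_score = centers[0], -1.0
    for k in range(1, n_bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0
        mu1 = (p[k:] * centers[k:]).sum() / w1
        score = w0 * w1 * (mu0 - mu1) ** 2
        if score > best_score:
            best_score, best_t = score, edges[k]
    return best_t

# Two synthetic diameter populations (microns), loosely mimicking a
# small-fiber class (~4 um) and a large-fiber class (~8 um)
rng = np.random.default_rng(0)
diams = np.concatenate([rng.normal(4.0, 0.5, 300), rng.normal(8.0, 0.8, 300)])
cutoff = otsu_threshold(diams)
print(round(cutoff, 2))  # lands between the two modes
```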
Barbaro, V; Grigioni, M; Daniele, C; D'Avenio, G; Boccanera, G
1997-11-01
The investigation of the flow field generated by cardiac valve prostheses is a necessary task to gain knowledge of the possible relationship between turbulence-derived stresses and the hemolytic and thrombogenic complications in patients after valve replacement. The literature on turbulent flows downstream of cardiac prostheses mostly concerns large-sized prostheses, with flow regimes varying from very low up to 6 L/min. The Food and Drug Administration draft guidance requires the study of the minimum prosthetic size at a high cardiac output to reach maximum Reynolds number conditions. Within the framework of a national research project on the characterization of cardiovascular endoprostheses, an in-depth study of the turbulence generated downstream of bileaflet cardiac valves is currently under way at the Laboratory of Biomedical Engineering of the Istituto Superiore di Sanita. Four models of 19 mm bileaflet valve prostheses were used: St Jude Medical HP, Edwards Tekna, Sorin Bicarbon, and CarboMedics. The prostheses were selected for the nominal Tissue Annulus Diameter as reported by the manufacturers, without any assessment of the valve sizing method, and were mounted in the aortic position. The aortic geometry was scaled for 19 mm prostheses using angiographic data. The turbulence-derived shear stresses were investigated very close to the valve (0.35 D0), using a two-dimensional Laser Doppler anemometry system and applying Principal Stress Analysis. Results concern typical turbulence quantities during a 50 ms window at peak flow in the systolic phase. Conclusions are drawn regarding the turbulence associated with valve design features, as well as the possible damage to blood constituents.
Allhusen, Virginia; Belsky, Jay; Booth-LaForce, Cathryn L.; Bradley, Robert; Brownwell, Celia A; Burchinal, Margaret; Campbell, Susan B.; Clarke-Stewart, K. Alison; Cox, Martha; Friedman, Sarah L.; Hirsh-Pasek, Kathryn; Houts, Renate M.; Huston, Aletha; Jaeger, Elizabeth; Johnson, Deborah J.; Kelly, Jean F.; Knoke, Bonnie; Marshall, Nancy; McCartney, Kathleen; Morrison, Frederick J.; O'Brien, Marion; Tresch Owen, Margaret; Payne, Chris; Phillips, Deborah; Pianta, Robert; Randolph, Suzanne M.; Robeson, Wendy W.; Spieker, Susan; Lowe Vandell, Deborah; Weinraub, Marsha
2004-01-01
This study evaluated the extent to which first-grade class size predicted child outcomes and observed classroom processes for 651 children (in separate classrooms). Analyses examined observed child-adult ratios and teacher-reported class sizes. Smaller classrooms showed higher quality instructional and emotional support, although children were…
Huang, Yu
Solar energy has become one of the major alternative renewable energy options owing to its abundance and accessibility. Because of the intermittent nature of sunlight, Maximum Power Point Tracking (MPPT) techniques are in high demand when a photovoltaic (PV) system is used to extract energy from it. This thesis proposes an advanced Perturb and Observe (P&O) algorithm aimed at realistic operating circumstances. First, a practical PV system model is studied, including the series and shunt resistances that are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the perturbed quantity, exploiting input impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy, and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low-insolation conditions, and continuous insolation variation.
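A minimal sketch of the adaptive-step P&O idea, assuming a made-up single-peak power-vs-duty-ratio curve in place of a real PV/boost-converter model: the duty ratio is perturbed in one direction while power rises, and the direction is reversed with a shrunken step once power falls:

```python
def pv_power(duty):
    """Stand-in for the PV power seen through the boost converter as a
    function of duty ratio: a smooth curve with one maximum (not a real
    single-diode model; peak is near duty = 0.549)."""
    return max(0.0, 100.0 * duty * (1.0 - duty) * (1.0 + 0.5 * duty))

def adaptive_po(steps=200, d0=0.1, step0=0.05, shrink=0.7):
    """Perturb & Observe on the duty ratio with an adaptive step size:
    keep the perturbation direction while power rises, reverse and
    shrink the step when power falls."""
    d, step = d0, step0
    p_prev = pv_power(d)
    direction = 1.0
    for _ in range(steps):
        d = min(max(d + direction * step, 0.0), 1.0)
        p = pv_power(d)
        if p < p_prev:          # overshot the peak: reverse, smaller step
            direction = -direction
            step *= shrink
        p_prev = p
    return d, p_prev

d_opt, p_opt = adaptive_po()
print(d_opt, p_opt)
```

The shrinking step reduces the steady-state oscillation around the maximum power point that plagues fixed-step P&O; the thesis's modifications for insolation changes are not modeled here.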
Social Class and Group Size as Predictors of Behavior in Male Equus kiang
Directory of Open Access Journals (Sweden)
Prameek M. Kannan
2017-11-01
Ethograms provide a systematic approach to identifying and quantifying the behavioral repertoire of an organism. This information may assist animal welfare in zoos, increase awareness of conservation needs, and help curb high-risk behaviors during human-wildlife conflict. Our primary objective was to use an equid ethogram to produce activity budgets for Equus kiang males, a social ungulate that is among the least-studied mammals worldwide and unknown to the ethological literature. We recently reported the existence of three social classes in this species: Territorial males, Bachelor males and 'Transient' males. Our secondary objective was therefore to compare activity budgets across these three groups. We found that kiang spent >70% of their time performing six behaviors: vigilance (34%), locomotion (24.2%), resting (14.2%), mixed foraging (12.5%), browsing (5.1%), and antagonism (1.1%). Over 2% of the total behavioral investment was spent on olfactory investigations (genital sniffing, sniffing proximity, and flehmen). Eleven of the eighteen behaviors differed by class. Habitat selection differed strongly by group, with Territorial males favoring mesic sites with greater vegetation abundance. Vigilance also differed according to habitat selection, but not group size; animals in the xeric, least vegetation-rich area were far less vigilant than animals at more attractive sites. We found that the full repertoire of behaviors, and the relative investment in each, differ according to social class. These findings are a reminder that researchers should make every effort to disambiguate social class among ungulates, and other taxa where behaviors are class-dependent.
DEFF Research Database (Denmark)
Nord, Martin
2004-01-01
This work proposes a quality of service differentiation algorithm, improving the service class granularity and isolation of our recently presented waveband plane based design. The design aims at overcoming potential hardware limitations and increasing the switch node dimensioning flexibility...... in core networks. Exploiting the wavelength dimension for contention resolution, using partially shared wavelength converter pools, avoids optical buffers and reduces wavelength converter count. These benefits are illustrated by numerical simulations, and are highlighted in a dimensioning study with three...
Leszczuk, Mikołaj; Dudek, Łukasz; Witkowski, Marcin
The VQiPS (Video Quality in Public Safety) Working Group, supported by the U.S. Department of Homeland Security, has been developing a user guide for public safety video applications. According to VQiPS, five parameters are of particular importance in influencing the ability to achieve a recognition task: usage time-frame, discrimination level, target size, lighting level, and level of motion. These parameters form what are referred to as Generalized Use Classes (GUCs). The aim of our research was to develop algorithms that would automatically assist classification of input sequences into one of the GUCs. The target size and lighting level parameters were addressed. The experiment described reveals the experts' ambiguity and hesitation during the manual target size determination process. However, the automatic methods developed for target size classification make it possible to determine GUC parameters with 70% agreement with the end-users' opinion. Lighting levels of the entire sequence can be classified with an efficiency reaching 93%. To make the algorithms available for use, a test application has been developed. It is able to process video files and display classification results; the user interface is very simple and requires only minimal user interaction.
Directory of Open Access Journals (Sweden)
Lloyd Queen
2011-08-01
Requirements for describing coniferous forests are changing in response to wildfire concerns, bio-energy needs, and climate change interests. At the same time, technology advancements are transforming how forest properties can be measured. Terrestrial Laser Scanning (TLS) is yielding promising results for measuring tree biomass parameters that, historically, have required costly destructive sampling and resulted in small sample sizes. Here we investigate whether TLS intensity data can be used to distinguish foliage and small branches (≤0.635 cm diameter; coincident with the one-hour timelag fuel size class) from larger branchwood (>0.635 cm) in Douglas-fir (Pseudotsuga menziesii) branch specimens. We also consider the use of laser density for predicting biomass by size class. Measurements are addressed across multiple ranges and scan angles. Results show TLS capable of distinguishing fine fuels from branches at a threshold of one standard deviation above mean intensity. Additionally, the relationship between return density and biomass is linear by fuel type for fine fuels (r2 = 0.898; SE 22.7%) and branchwood (r2 = 0.937; SE 28.9%), as well as for total mass (r2 = 0.940; SE 25.5%). Intensity decays predictably as scan distances increase; however, the range-intensity relationship is best described by an exponential model rather than 1/d2. Scan angle appears to have no systematic effect on fine fuel discrimination, while some differences are observed in density-mass relationships with changing angles due to shadowing.
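The mean-plus-one-standard-deviation intensity rule can be sketched as follows, assuming (as an illustration only) that branchwood returns are brighter than foliage returns; the intensity values below are synthetic, not TLS data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic normalized return intensities: branchwood brighter than foliage
foliage = rng.normal(0.30, 0.08, 500)
branch = rng.normal(0.70, 0.10, 200)
intensity = np.concatenate([foliage, branch])

# Threshold rule from the abstract: one standard deviation above the mean
threshold = intensity.mean() + intensity.std()
is_branch = intensity > threshold
print(round(threshold, 3), int(is_branch.sum()))
```

With two well-separated intensity populations, the mean + 1 SD of the pooled sample falls between them, so the rule recovers the branch class with few foliage false positives.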
Size class structure, growth rates, and orientation of the central Andean cushion Azorella compacta
Directory of Open Access Journals (Sweden)
Catherine Kleier
2015-03-01
Azorella compacta (llareta; Apiaceae) forms dense, woody cushions and characterizes the high-elevation rocky slopes of the central Andean Altiplano. Field studies along an elevational gradient of A. compacta within Lauca National Park in northern Chile found a reverse J-shaped distribution of size classes of individuals, with abundant small plants at all elevations. A new elevational limit for A. compacta was established at 5,250 m. A series of cushions marked 14 years earlier showed either slight shrinkage or small degrees of growth up to 2.2 cm yr−1. Despite their irregularity in growth, cushions of A. compacta show a strong orientation, centered on a north-facing aspect at an angle of about 20° from horizontal. This orientation, which maximizes solar irradiance, closely matches previous observations of a population favoring north-facing slopes at a similar angle. Populations of A. compacta appear to be stable, or even expanding, with young plants abundant.
Wang, Qing; Wang, Dan; Wen, Xuefa; Yu, Guirui; He, Nianpeng; Wang, Rongfu
2015-01-01
The principle of enzyme kinetics suggests that the temperature sensitivity (Q10) of soil organic matter (SOM) decomposition is inversely related to organic carbon (C) quality, i.e., the C quality-temperature (CQT) hypothesis. We tested this hypothesis by performing laboratory incubation experiments with bulk soil, macroaggregates (MA, 250-2000 μm), microaggregates (MI, 53-250 μm), and mineral fractions (MF, <53 μm). Temperature and aggregate size significantly affected SOM decomposition, with notable interactive effects (P < 0.05); decomposition differed among fractions in the order MA > MF > bulk soil > MI (P < 0.05), and cumulative C emission differed significantly among aggregate size classes (P < 0.0001). These findings suggest that feedback from SOM decomposition in response to changing temperature is closely associated with soil aggregation, and highlight the complex responses of ecosystem C budgets to future warming scenarios.
Roundup Ready soybean gene concentrations in field soil aggregate size classes.
Levy-Booth, David J; Gulden, Robert H; Campbell, Rachel G; Powell, Jeff R; Klironomos, John N; Pauls, K Peter; Swanton, Clarence J; Trevors, Jack T; Dunfield, Kari E
2009-02-01
Roundup Ready (RR) soybeans containing recombinant Agrobacterium spp. CP4 5-enol-pyruvyl-shikimate-3-phosphate synthase (cp4 epsps) genes tolerant to the herbicide glyphosate are extensively grown worldwide. The concentration of recombinant DNA from RR soybeans in soil aggregates was studied due to the possibility of genetic transformation of soil bacteria. This study used real-time PCR to examine the concentration of cp4 epsps in four field soil aggregate size classes (>2000 μm, 2000-500 μm, 500-250 μm and <250 μm). The >2000 μm fraction contained between 66.62% and 99.18% of total gene copies, although it accounted for only about 30.00% of the sampled soil. Aggregate formation may facilitate persistence of recombinant DNA.
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed-sample-size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and the allocation rate to the treatment arms can be modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, or allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
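The mechanism behind the inflation can be illustrated with a small numerical sketch (not the paper's derivation): under H0, an experimenter who sees the interim z-value and picks whichever second-stage sample size maximizes the conditional rejection probability of a naively pooled z-test pushes the overall type 1 error above the nominal level. The stage sizes and the two-point choice set below are arbitrary assumptions:

```python
from math import erf, exp, sqrt, pi

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

n1, crit = 50, 1.96          # stage-1 size per arm, naive one-sided critical value
options = (50, 200)          # second-stage sizes the experimenter may choose

def cond_reject(z1, n2):
    # P(final naively pooled z > crit | interim z1), under H0
    b = (crit * sqrt(n1 + n2) - sqrt(n1) * z1) / sqrt(n2)
    return 1.0 - Phi(b)

def type1(choose_worst):
    # Integrate the conditional rejection probability over the N(0,1)
    # distribution of the interim z-value (rectangle rule)
    dz, lo, hi = 1e-3, -8.0, 8.0
    total, z = 0.0, lo
    while z < hi:
        if choose_worst:
            r = max(cond_reject(z, n2) for n2 in options)
        else:
            r = cond_reject(z, options[0])   # pre-planned fixed design
        total += phi(z) * r * dz
        z += dz
    return total

print(type1(False))  # fixed design: the nominal one-sided 0.025
print(type1(True))   # worst-case adaptive choice: inflated above nominal
```

The fixed-design branch serves as an internal check (it recovers the nominal level), while the worst-case branch shows the inflation that motivates adjusted critical values or conditional-error methods.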
Carey, Lawrence D.; Petersen, Walter A.
2011-01-01
The estimation of rain drop size distribution (DSD) parameters from polarimetric radar observations is accomplished by first establishing a relationship between differential reflectivity (Zdr) and the central tendency of the rain DSD, such as the median volume diameter (D0). Since Zdr does not provide a direct measurement of DSD central tendency, the relationship is typically derived empirically from rain drop and radar scattering models (e.g., D0 = F[Zdr]). Past studies have explored the general sensitivity of these models to temperature, radar wavelength, the drop shape vs. size relation, and DSD variability. Much progress has been made in recent years in measuring the drop shape and DSD variability using surface-based disdrometers, such as the 2D Video disdrometer (2DVD), and documenting their impact on polarimetric radar techniques. In addition to measuring drop shape, another advantage of the 2DVD over earlier impact-type disdrometers is its ability to resolve drop diameters in excess of 5 mm. Despite this improvement, the sampling limitations of a disdrometer, including the 2DVD, make it very difficult to adequately measure the maximum drop diameter (Dmax) present in a typical radar resolution volume. As a result, Dmax must still be assumed in the drop and radar models from which D0 = F[Zdr] is derived. Since scattering resonance at C-band wavelengths begins to occur at drop diameters larger than about 5 mm, modeled C-band radar parameters, particularly Zdr, can be sensitive to Dmax assumptions. In past C-band radar studies, a variety of Dmax assumptions have been made, including the actual disdrometer estimate of Dmax during a typical sampling period (e.g., 1-3 minutes), Dmax = C (where C is constant at values from 5 to 8 mm), and Dmax = M*D0 (where the constant multiple, M, is fixed at values ranging from 2.5 to 3.5). The overall objective of this NASA Global Precipitation Measurement
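The Dmax sensitivity can be illustrated without a full scattering model: truncating an assumed gamma DSD at different Dmax values changes a high-order, reflectivity-like moment far more than it changes D0. The DSD parameters below are illustrative, and Rayleigh scattering is assumed (so the C-band resonance effects discussed above are not modeled):

```python
import numpy as np

def gamma_dsd(D, N0=8000.0, mu=3.0, lam=2.0):
    """Gamma drop size distribution N(D) = N0 * D^mu * exp(-lam * D),
    D in mm (illustrative parameters, not fitted to data)."""
    return N0 * D**mu * np.exp(-lam * D)

def d0_and_z(d_max, dD=0.01):
    """Median volume diameter D0 and a 6th-moment reflectivity proxy for
    the DSD truncated at d_max."""
    D = np.arange(dD, d_max, dD)
    n = gamma_dsd(D)
    vol = n * D**3 * dD                   # water volume per size bin
    cdf = np.cumsum(vol) / vol.sum()
    d0 = D[np.searchsorted(cdf, 0.5)]     # median volume diameter
    z = (n * D**6 * dD).sum()             # 6th moment ~ Rayleigh reflectivity
    return d0, z

d0_5, z_5 = d0_and_z(5.0)
d0_8, z_8 = d0_and_z(8.0)
print(d0_5, d0_8, z_8 / z_5)  # the 6th moment is far more Dmax-sensitive than D0
```

Because the 6th moment weights the largest drops so heavily, the truncation point matters much more for reflectivity-type quantities (and hence for Zdr) than for D0 itself.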
DEFF Research Database (Denmark)
Smit, Jeroen; Bernhammer, Lars O.; Navalkar, Sachin T.
2016-01-01
to fatigue damage have been identified. In these regions, the turbine energy output can be increased by deflecting the trailing edge (TE) flap in order to track the maximum power coefficient as a function of local, instantaneous speed ratios. For this purpose, the TE flap configuration for maximum power...... generation has been using blade element momentum theory. As a first step, the operation in non-uniform wind field conditions was analysed. Firstly, the deterministic fluctuation in local tip speed ratio due to wind shear was evaluated. The second effect is associated with time delays in adapting the rotor...
Katherine A. McCulloh; Daniel M. Johnson; Joshua Petitmermet; Brandon McNellis; Frederick C. Meinzer; Barbara Lachenbruch; Nathan Phillips
2015-01-01
The physiological mechanisms underlying the short maximum height of shrubs are not understood. One possible explanation is that differences in the hydraulic architecture of shrubs compared with co-occurring taller trees prevent the shrubs from growing taller. To explore this hypothesis, we examined various hydraulic parameters, including vessel lumen diameter,...
Directory of Open Access Journals (Sweden)
David Zyngier
2014-03-01
The question of class size continues to attract the attention of educational policymakers and researchers alike. Australian politicians and their advisers, policy makers and political commentators agree that much of Australia's increased expenditure on education in the last 30 years has been 'wasted' on efforts to reduce class sizes. They conclude that funding is therefore not the problem in Australian education, arguing that extra funding has not led to improved academic results. Many scholars have found serious methodological issues with the existing reviews that claim a lack of educational and economic utility in reducing class sizes in schools. Significantly, the research supporting the current policy advice to both state and federal ministers of education is highly selective, and based on limited studies originating from the USA. This comprehensive review of 112 papers from 1979-2014 assesses whether these conclusions about the effect of smaller class sizes still hold. The review draws on a wider range of studies, starting with Australian research, but also including similar education systems such as England, Canada, New Zealand and non-English-speaking countries of Europe. The review assesses the different measures of class size and how they affect the results, and also whether other variables such as teaching methods are taken into account. Findings suggest that smaller class sizes in the first four years of school can have an important and lasting impact on student achievement, especially for children from culturally, linguistically and economically disenfranchised communities. This is particularly true when smaller classes are combined with appropriate teacher pedagogies suited to reduced student numbers. Suggested policy recommendations involve targeted funding for specific lessons and schools, combined with professional development of teachers. These measures may help to address the inequality of schooling and
Ravanbakhsh, Ali; Franchini, Sebastián
2012-10-01
In recent years, there has been continuing interest in the participation of university research groups in space technology studies by means of their own microsatellites. Involvement in such projects has some inherent challenges, such as limited budget and facilities. Also, because the main objective of these projects is educational, there are usually uncertainties regarding the in-orbit mission and scientific payloads at the early phases of the project. On the other hand, there are predetermined limitations on their mass and volume budgets, because most of them are launched as auxiliary payloads, which reduces the launch cost considerably. The satellite structure subsystem is the one most affected by the launcher constraints. This can affect different aspects, including dimensions, strength and frequency requirements. In this paper, the main focus is on developing a structural design sizing tool containing not only the primary structure properties as variables but also system-level variables such as the payload mass budget and the satellite's total mass and dimensions. This approach enables the design team to obtain better insight into the design over an extended design envelope. The structural design sizing tool is based on analytical structural design formulas and appropriate assumptions, including both static and dynamic models of the satellite. Finally, a Genetic Algorithm (GA) multiobjective optimization is applied to the design space. The result is a Pareto-optimal front based on two objectives, minimum satellite total mass and maximum payload mass budget, which gives useful insight to the design team at the early phases of the design.
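The Pareto-optimal trade-off between minimum total mass and maximum payload mass budget can be sketched with a brute-force non-dominance filter over candidate designs (random, hypothetical design points rather than the paper's sizing-tool output; a GA such as NSGA-II would search the space instead of enumerating it):

```python
import random

random.seed(42)
# Hypothetical design points: (satellite total mass [kg], payload mass budget [kg])
designs = [(random.uniform(50, 100), random.uniform(5, 25)) for _ in range(200)]

def dominates(a, b):
    """a dominates b when a is no heavier and carries no less payload,
    and is strictly better in at least one objective."""
    return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

# The Pareto front: designs not dominated by any other design
pareto = [d for d in designs if not any(dominates(o, d) for o in designs)]
pareto.sort()
print(len(pareto), pareto[:3])
```

Along the front sorted by total mass, payload budget necessarily increases: any heavier design with no payload gain would be dominated and filtered out.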
Directory of Open Access Journals (Sweden)
Qing Wang
The principle of enzyme kinetics suggests that the temperature sensitivity (Q10) of soil organic matter (SOM) decomposition is inversely related to organic carbon (C) quality, i.e., the C quality-temperature (CQT) hypothesis. We tested this hypothesis by performing laboratory incubation experiments with bulk soil, macroaggregates (MA, 250-2000 μm), microaggregates (MI, 53-250 μm), and mineral fractions (MF, <53 μm). Decomposition differed among fractions in the order MA > MF > bulk soil > MI (P < 0.05). The Q10 values were highest for MA, followed (in decreasing order) by bulk soil, MF, and MI. Similarly, the activation energies (Ea) for MA, bulk soil, MF, and MI were 48.47, 33.26, 27.01, and 23.18 kJ mol-1, respectively. The observed significant negative correlations between Q10 and the C quality index in bulk soil and soil aggregates (P < 0.05) suggested that the CQT hypothesis is applicable to soil aggregates. Cumulative C emission differed significantly among aggregate size classes (P < 0.0001), with the largest values occurring in MA (1101 μg g-1), followed by MF (976 μg g-1) and MI (879 μg g-1). These findings suggest that feedback from SOM decomposition in response to changing temperature is closely associated with soil aggregation, and highlight the complex responses of ecosystem C budgets to future warming scenarios.
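The reported activation energies and Q10 values can be cross-checked with the Arrhenius relation Q10 = exp[(Ea/R)(1/T1 − 1/T2)] with T2 = T1 + 10 K. A short sketch using the abstract's Ea values (the 15 °C reference temperature is an assumption, since the incubation temperatures are not given here):

```python
from math import exp

R = 8.314  # gas constant, J mol^-1 K^-1

def q10_from_ea(ea_kj, t1_c=15.0):
    """Q10 implied by an activation energy Ea (kJ/mol) over the
    interval [t1, t1 + 10] degrees C, via the Arrhenius equation."""
    t1 = t1_c + 273.15
    t2 = t1 + 10.0
    return exp((ea_kj * 1e3 / R) * (1.0 / t1 - 1.0 / t2))

# Ea values reported in the abstract (kJ/mol): MA, bulk soil, MF, MI
for name, ea in [("MA", 48.47), ("bulk", 33.26), ("MF", 27.01), ("MI", 23.18)]:
    print(name, round(q10_from_ea(ea), 2))
```

The implied ordering (MA highest, then bulk soil, MF, MI) matches the Q10 ranking stated in the abstract.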
International Nuclear Information System (INIS)
Valizadeh, Solmaz; Shahbeig, Shahrzad; Mohseni, Sudeh; Azimi, Fateme; Bakhshandeh, Hooman
2015-01-01
In orthodontic science, diagnosis of the facial skeletal type (class I, II, and III) is essential to make the correct treatment plan, which is usually expensive and complicated. Sometimes results from the analysis of lateral cephalometric radiographs are not enough to discriminate facial skeletal types. In this situation, knowledge about the relationship between the shape and size of the sella turcica and the facial skeletal class can help in making a more definitive treatment decision. The present study was designed to investigate this relationship in patients referred to a dental school in Iran. In this descriptive-analytical study, cephalometric radiographs of 90 candidates for orthodontic treatment (44 females and 46 males) with an age range of 14 - 26 years and equal distribution in terms of class I, class II, and class III facial skeletal classification were selected. The shape, length, diameter, and depth of the sella turcica were determined on the radiographs. Linear dimensions were assessed by one-way analysis of variance, while the correlation between the dimensions and age was investigated using Pearson's correlation coefficient. The sella turcica had normal morphology in 24.4% of the patients, while irregularity (notching) in the posterior part of the dorsum sella was observed in 15.6%, a double contour of the sellar floor in 5.6%, a sella turcica bridge in 23.3%, an oblique anterior wall in 20%, and a pyramidal shape of the dorsum sella in 11.1% of the subjects. In total, 46.7% of class I patients had a normal shape of the sella turcica, 23.3% of class II patients had an oblique anterior wall and a pyramidal shape of the dorsum sella, and 43.3% of class III individuals had a sella turcica bridge (the greatest values). Sella turcica length was significantly greater in class III patients compared to class II and class I (P < 0.0001). However, the depth and diameter of the sella turcica were similar in class I, class II, and class III patients. Furthermore, age was significantly
Harfitt, Gary James
2012-01-01
Class size research suggests that teachers do not vary their teaching strategies when moving from large to smaller classes. This study draws on interviews and classroom observations of three experienced English language teachers working with large and reduced-size classes in Hong Kong secondary schools. Findings from the study point to subtle…
2010-07-01
40 CFR, Protection of Environment, vol. 21 (2010-07-01): Size classes and associated liability... Environmental Protection Agency (continued), Water Programs, Liability Limits for... privity and knowledge of the owner or operator, the following limits of liability are established for...
Al Kuwaiti, Ahmed; AlQuraan, Mahmoud; Subbarayalu, Arun Vijay
2016-01-01
Objective: This study aims to investigate the interaction between response rate and class size and its effects on students' evaluation of instructors and the courses offered at a higher education Institution in Saudi Arabia. Study Design: A retrospective study design was chosen. Methods: One thousand four hundred and forty four different courses…
Li, Yuting
2017-01-01
To explore whether the dispersion of sediments in the North Atlantic can be related to modern and past Atlantic Meridional Overturning Circulation (AMOC) flow speed, particle size distributions (weight %, sortable silt mean grain size) and grain-size separated (0–4, 4–10, 10–20, 20–30, 30–40 and 40–63 µm) Sm-Nd isotopes and trace element concentrations are measured on 12 cores along the flow path of the Western Boundary Undercurrent and in the central North Atlantic since the Last Glacial Maximum ...
Directory of Open Access Journals (Sweden)
Mandy Kirkham-King
2017-12-01
Full Text Available Optimizing physical activity during physical education is necessary for children to achieve daily physical activity recommendations. The purpose of this study was to examine the relationship among various contextual factors and accelerometer-measured physical activity during elementary physical education. Data were collected during 2015–2016 from 281 students (1st–5th grade, 137 males, 144 females) from a private school located in a metropolitan area of Utah in the U.S. Students wore accelerometers for 12 consecutive weeks at a wear frequency of 3 days per week during physical education. A multi-level general linear mixed effects model was employed to examine the relationship among various physical education contextual factors and percent of wear time in moderate-to-vigorous physical activity (%MVPA), accounting for clustering of observations within students and the clustering of students within classrooms. Explored contextual factors included grade level, lesson context, sex, and class size. Main effects and interactions among the factors were explored in the multi-level models. A two-way interaction of lesson context and class size on %MVPA was statistically significant. The greatest differences were found between fitness lessons using small class sizes and motor skill lessons using larger class sizes (β = 14.8%, 95% C.I. 5.7%–23.9%, p < 0.001). Lessons that included a focus on fitness activities with class sizes of <25 students were associated with significantly higher %MVPA during elementary physical education. Keywords: Exercise, Physical education and training, Adolescents
Bowne, Jocelyn Bonnes; Magnuson, Katherine A.; Schindler, Holly S.; Duncan, Greg J.; Yoshikawa, Hirokazu
2017-01-01
This study uses data from a comprehensive database of U.S. early childhood education program evaluations published between 1960 and 2007 to evaluate the relationship between class size, child-teacher ratio, and program effect sizes for cognitive, achievement, and socioemotional outcomes. Both class size and child-teacher ratio showed nonlinear…
Directory of Open Access Journals (Sweden)
Willi Pabst
2017-03-01
Full Text Available A generalized formulation of transformation matrices is given for the reconstruction of sphere diameter distributions from their section circle diameter distributions. This generalized formulation is based on a weight shift parameter that can be adjusted from 0 to 1. It includes the well-known Saltykov and Cruz-Orive transformations as special cases (for parameter values of 0 and 0.5, respectively). The physical meaning of this generalization is explained (showing, among others, that the Woodhead transformation should be bounded by the Saltykov transformation on one side and by our transformation on the other), and its numerical performance is investigated. In particular, it is shown that our generalized transformation is numerically highly unstable, i.e. it introduces numerical artefacts (oscillations or even unphysical negative sphere frequencies) into the reconstruction, and can lead to completely wrong results when a critical value of the parameter (usually in the range 0.7-0.9, depending on the type of distribution) is exceeded. It is shown that this numerical instability is an intrinsic feature of these transformations that depends not only on the weight shift parameter value but is affected both by the type and the position of the distribution. It occurs in a natural way also for the Cruz-Orive and other transformations with finite weight shift parameter values and is not just caused by inadequate input data (e.g. as a consequence of an insufficient number of objects counted), as commonly assumed. Finally, it is shown that an even more general class of transformation matrices can be defined that includes, in addition to the aforementioned transformations, also the Wicksell transformation.
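The transformations discussed in this abstract are lower-triangular matrices linking sphere diameter classes to section-circle diameter classes. As a minimal illustration of the classical unweighted (Saltykov-type) special case, not the paper's generalized parameterization, the following pure-Python sketch builds such a matrix and unfolds a circle-size histogram by back-substitution. The class width `delta` and all function names are our own illustrative choices.

```python
import math

def section_prob_matrix(n, delta=1.0):
    """K[i][j] = probability that a random planar section through a sphere
    in diameter class j (diameter (j + 1) * delta) produces a section
    circle whose diameter falls in class i."""
    K = [[0.0] * n for _ in range(n)]
    for j in range(n):
        D = (j + 1) * delta
        for i in range(j + 1):
            lo = i * delta
            hi = min((i + 1) * delta, D)
            K[i][j] = (math.sqrt(D * D - lo * lo) - math.sqrt(D * D - hi * hi)) / D
    return K

def unfold(NA, K, delta=1.0):
    """Recover sphere class densities NV from circle class densities NA by
    back-substitution of NA[i] = sum_j K[i][j] * D_j * NV[j] (a sphere of
    diameter D is hit by a random plane with probability proportional to D)."""
    n = len(NA)
    NV = [0.0] * n
    for j in range(n - 1, -1, -1):
        D = (j + 1) * delta
        tail = sum(K[j][k] * (k + 1) * delta * NV[k] for k in range(j + 1, n))
        NV[j] = (NA[j] - tail) / (K[j][j] * D)
    return NV
```

The numerical fragility the paper analyzes shows up in exactly this back-substitution step: small errors in the circle histogram propagate and can produce the negative sphere frequencies mentioned above.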
Directory of Open Access Journals (Sweden)
Shuibo Hu
2018-03-01
Full Text Available Ocean colour remote sensing is used as a tool to detect phytoplankton size classes (PSCs). In this study, the Medium Resolution Imaging Spectrometer (MERIS), Moderate Resolution Imaging Spectroradiometer (MODIS), and Sea-viewing Wide Field-of-view Sensor (SeaWiFS) phytoplankton size class (PSC) products were compared with in-situ High Performance Liquid Chromatography (HPLC) data for the South China Sea (SCS), collected from August 2006 to September 2011. Four algorithms were evaluated to determine their ability to detect three phytoplankton size classes. Chlorophyll-a (Chl-a) and absorption spectra of phytoplankton (aph(λ)) were also measured to help understand PSC algorithm performance. Results show that the three abundance-based approaches performed better than the inherent optical property (IOP)-based approach in the SCS. The size detection of microplankton and picoplankton was generally better than that of nanoplankton. A three-component model was recommended to produce maps of surface PSCs in the SCS. For the IOP-based approach, satellite retrievals of inherent optical properties and the PSC algorithm both affect inversion accuracy. However, for abundance-based approaches, the selection of the PSC algorithm seems to be more critical, owing to low uncertainty in the satellite Chl-a input data.
Directory of Open Access Journals (Sweden)
LÚCIO FLÁVIO MACEDO MOTA
2015-01-01
Full Text Available This study aimed at evaluating the growth performance of Nelore cattle classified into different frame size classes with regard to body weights and morphometric measures at different ages. Weights and morphometric measures of Nelore bulls up to 1 year of age were recorded monthly. The characteristics evaluated were weight at birth and at 120, 205, 240, and 365 days of age, withers height, rump height, thoracic perimeter, distance between pin bones, distance between hip bones, chest width, chest depth, space under the sternum, and hip length. Frame size scores, classified as medium, large, and extreme, were estimated using equations and tables according to the Beef Improvement Federation (BIF). Data were subjected to analysis of variance and the Tukey-Kramer test at 5% probability, and analyses were performed by canonical variables and grouping of genotypes by the method of Tocher. Animals in the larger frame size class were heavier and had larger morphometric measurements than animals in the smaller class. Correlations between weights at different ages were high. Weight correlated positively with the body measures, indicating that the weight gain of the animals increasingly influenced frame size. Cluster analysis resulted in three distinct genetic groups that were similar within groups and genetically divergent between them.
Directory of Open Access Journals (Sweden)
Jean Potvin
Full Text Available Bulk filter feeding is an energetically efficient strategy for resource acquisition and assimilation, and facilitates the maintenance of extreme body size as exemplified by baleen whales (Mysticeti) and multiple lineages of bony and cartilaginous fishes. Among mysticetes, rorqual whales (Balaenopteridae) exhibit an intermittent ram filter feeding mode, lunge feeding, which requires the abandonment of body-streamlining in favor of a high-drag, mouth-open configuration aimed at engulfing a very large amount of prey-laden water. Particularly while lunge feeding on krill (the most widespread prey preference among rorquals), the effort required during engulfment involves short bouts of high-intensity muscle activity that demand high metabolic output. We used computational modeling together with morphological and kinematic data on humpback (Megaptera novaeangliae), fin (Balaenoptera physalus), blue (Balaenoptera musculus), and minke (Balaenoptera acutorostrata) whales to estimate engulfment power output in comparison with standard metrics of metabolic rate. The simulations reveal that engulfment metabolism increases across the full body size range of the larger rorqual species to nearly 50 times the basal metabolic rate of terrestrial mammals of the same body mass. Moreover, they suggest that the metabolism of the largest body sizes runs with significant oxygen deficits during mouth opening, namely, 20% over maximum VO2 at the size of the largest blue whales, thus requiring significant contributions from anaerobic catabolism during a lunge and significant recovery after a lunge. Our analyses show that engulfment metabolism is also significantly lower for smaller adults, typically one-tenth to one-half of VO2max. These results not only point to a physiological limit on maximum body size in this lineage, but also have major implications for the ontogeny of extant rorquals as well as the evolutionary pathways used by ancestral toothed whales to transition from hunting
Directory of Open Access Journals (Sweden)
Martori, Ricardo
2005-05-01
age groups of each species changed over time due to recruitment, and the recruitment period varied within and between species. During the first period, the highest diversity index was recorded in April 1999 (5.46); during the second study period, the highest diversity index was recorded in January 2000. This study shows the value of temporally extended studies and emphasizes the importance of understanding temporal variation in the phenology, diversity, and activity patterns of herpetological assemblages. From a conservationist perspective, knowledge of the abundance, diversity, and activity patterns of a herpetological assemblage is essential to understand community dynamics and habitat utilization. We proposed four null hypotheses regarding the dynamics of an assemblage of amphibians and reptiles from Argentina: (1) the capture frequency of each species studied is similar during the two years; (2) the capture frequency of each species is similar in every month of each period; (3) the activity of each species is similar to that of every other species; and (4) the proportion of each size class for each species is similar throughout the year. During the study, nineteen species were collected: ten species of Amphibia belonging to four families, and nine species of Squamata, distributed among seven families. In relatively complex habitats, with dense vegetation and very irregular herpetological activity, the pitfall method is one of the few efficient ways to evaluate terrestrial animal activity. Pitfall traps are an effective method to perform herpetological inventories, but results must be reported with caution because traps capture some species more easily than others. The main results of this study were: Hypothesis 1 was rejected for all species except Mabuya dosivittata, which showed similar frequencies during both years. Hypothesis 2 was rejected, as all species showed significant seasonal differences. The most variable species were Bufo
Participation and Collaborative Learning in Large Class Sizes: Wiki, Can You Help Me?
de Arriba, Raúl
2017-01-01
Collaborative learning has a long tradition within higher education. However, its application in classes with a large number of students is complicated, since it is a teaching method that requires a high level of participation from the students and careful monitoring of the process by the educator. This article presents an experience in…
[Size of lower jaw as an early indicator of skeletal class III development].
Stojanović, Zdenka; Nikodijević, Angelina; Udovicić, Bozidar; Milić, Jasmina; Nikolić, Predrag
2008-08-01
Malocclusion of skeletal class III is a complex abnormality, with a characteristic sagittal position of the lower jaw in front of the upper one. A higher degree of prognathism of the lower jaw in relation to the upper one can be the consequence of its excessive length. The aim of this study was to find differences in the length of the lower jaw between children with skeletal class III and children with a normal sagittal interjaw relation (skeletal class I) in the period of mixed dentition. After clinical and x-ray diagnostics, profile tele-x-rays of the head were analyzed in 60 examinees with mixed dentition, aged from 6 to 12 years. The examinees were divided into two groups: group 1, the children with skeletal class III, and group 2, the children with skeletal class I. The lengths of the lower jaw, upper jaw, and cranial base were measured. The proportional relations between the lengths measured within each group were established, and the level of difference in the lengths measured and their proportions between the groups was estimated. No significant difference between the groups was found in the body length, ramus length, or total length of the lower jaw. The proportional relation between the body length and the ramus length of the lower jaw and the proportional relation between the anterior cranial base and the lower jaw body were not significantly different. A significant difference was found in the proportional relations of the total length of the lower jaw to the total lengths of the cranial base and the upper jaw, and in the proportional relation of the lengths of the lower and upper jaw bodies. Of all the analyzed parameters, the following were selected as early indicators of the development of skeletal class III in the lower jaw: greater total length of the lower jaw, proportional to the total lengths of the cranial base and the upper jaw, as well as greater length of the lower jaw body, proportional to the length of the upper jaw body.
Composition and size class structure of tree species in Ihang'ana
African Journals Online (AJOL)
nb
Previous plant biodiversity studies in this ecosystem concentrated on large-sized Forest ... assess tree species composition, structure and diversity in Ihang'ana FR (2982 ha), one of the ..... Dombeya rotundifolia. (Hochst) ... Ficus lutea. Vahl.
Zeng, Chen; Rosengard, Sarah Z.; Burt, William; Peña, M. Angelica; Nemcek, Nina; Zeng, Tao; Arrigo, Kevin R.; Tortell, Philippe D.
2018-06-01
We evaluate several algorithms for the estimation of phytoplankton size class (PSC) and functional type (PFT) biomass from ship-based optical measurements in the Subarctic Northeast Pacific Ocean. Using underway measurements of particulate absorption and backscatter in surface waters, we derived estimates of PSC/PFT based on chlorophyll-a concentrations (Chl-a), particulate absorption spectra and the wavelength dependence of particulate backscatter. Optically-derived [Chl-a] and phytoplankton absorption measurements were validated against discrete calibration samples, while the derived PSC/PFT estimates were validated using size-fractionated Chl-a measurements and HPLC analysis of diagnostic photosynthetic pigments (DPA). Our results show that PSC/PFT algorithms based on [Chl-a] and particulate absorption spectra performed significantly better than the backscatter slope approach. These two more successful algorithms yielded estimates of phytoplankton size classes that agreed well with HPLC-derived DPA estimates (RMSE = 12.9%, and 16.6%, respectively) across a range of hydrographic and productivity regimes. Moreover, the [Chl-a] algorithm produced PSC estimates that agreed well with size-fractionated [Chl-a] measurements, and estimates of the biomass of specific phytoplankton groups that were consistent with values derived from HPLC. Based on these results, we suggest that simple [Chl-a] measurements should be more fully exploited to improve the classification of phytoplankton assemblages in the Northeast Pacific Ocean.
Cultural constructions of "obesity": understanding body size, social class and gender in Morocco.
Batnitzky, Adina K
2011-01-01
This article presents data from an in-depth qualitative study of overweight and diabetic women in Morocco, a North African country experiencing a rapid increase in obesity according to national statistics. This case study explores the heterogeneous relationship among health, culture and religion in Morocco by highlighting the relationship between the intricacies of women's everyday lives and their body sizes. My findings suggest that although the Body Mass Index (BMI) of adult women has been documented to have increased in Morocco along with other macroeconomic changes (i.e., increases in urbanization, etc.), "obesity" has yet to be universally medicalized in the Moroccan context. As such, women do not generally utilize a medicalized concept of obesity in reference to their larger body sizes. Rather, cultural constructions of "obesity" are understood through cultural understandings of a larger body size, religious beliefs about health and illness, and the nature of women's religious participation. This stands in contrast to dominant accounts about the region that promote an overall veneration of a larger body size for women. Copyright © 2010 Elsevier Ltd. All rights reserved.
The effects of fluvial transport on radionuclide concentrations on different particle size classes
International Nuclear Information System (INIS)
Dyer, F.J.; Olley, J.M.
1998-01-01
This paper reports on the effects of grain abrasion and disaggregation on the distribution of 137Cs with respect to particle size, and the effects this may have on the use of 137Cs for determining the origin of recent sediment. 137Cs is a product of above-ground nuclear testing and has been deposited on the earth's surface by rainfall. On contact with soil, 137Cs is strongly adsorbed by soil particles, and there is a direct correlation between 137Cs concentration and decreasing particle size. Rapid adsorption means that 137Cs is preferentially concentrated in surface soils, and its subsequent redistribution by physical rather than chemical processes has led to 137Cs being widely used to study soil erosion
Separating the Classes of Recursively Enumerable Languages Based on Machine Size
Czech Academy of Sciences Publication Activity Database
van Leeuwen, J.; Wiedermann, Jiří
2015-01-01
Roč. 26, č. 6 (2015), s. 677-695 ISSN 0129-0541 R&D Projects: GA ČR GAP202/10/1333 Grant - others:GA ČR(CZ) GA15-04960S Institutional support: RVO:67985807 Keywords : recursively enumerable languages * RE hierarchy * finite languages * machine size * descriptional complexity * Turing machines with advice Subject RIV: IN - Informatics, Computer Science Impact factor: 0.467, year: 2015
Zhang, Lin; Huttin, Olivier; Marie, Pierre-Yves; Felblinger, Jacques; Beaumont, Marine; Chillou, Christian DE; Girerd, Nicolas; Mandry, Damien
2016-11-01
To compare three widely used methods for myocardial infarct (MI) sizing on late gadolinium-enhanced (LGE) magnetic resonance (MR) images: manual delineation and two semiautomated techniques (full-width at half-maximum [FWHM] and n-standard deviation [SD]). 3T phase-sensitive inversion-recovery (PSIR) LGE images of 114 patients after an acute MI (2-4 days and 6 months) were analyzed by two independent observers to determine both total and core infarct sizes (TIS/CIS). Manual delineation served as the reference for determination of optimal thresholds for semiautomated methods after thresholding at multiple values. Reproducibility and accuracy were expressed as overall bias ± 95% limits of agreement. Mean infarct sizes by manual methods were 39.0%/24.4% for the acute MI group (TIS/CIS) and 29.7%/17.3% for the chronic MI group. The optimal thresholds (ie, providing the closest mean value to the manual method) were FWHM30% and 3SD for the TIS measurement and FWHM45% and 6SD for the CIS measurement (paired t-test; all P > 0.05). The best reproducibility was obtained using FWHM. For TIS measurement in the acute MI group, intra-/interobserver agreements, from Bland-Altman analysis, with FWHM30%, 3SD, and manual were -0.02 ± 7.74%/-0.74 ± 5.52%, 0.31 ± 9.78%/2.96 ± 16.62% and -2.12 ± 8.86%/0.18 ± 16.12, respectively; in the chronic MI group, the corresponding values were 0.23 ± 3.5%/-2.28 ± 15.06, -0.29 ± 10.46%/3.12 ± 13.06% and 1.68 ± 6.52%/-2.88 ± 9.62%, respectively. A similar trend for reproducibility was obtained for CIS measurement. However, semiautomated methods produced inconsistent results (variabilities of 24-46%) compared to manual delineation. The FWHM technique was the most reproducible method for infarct sizing both in acute and chronic MI. However, both FWHM and n-SD methods showed limited accuracy compared to manual delineation. J. Magn. Reson. Imaging 2016;44:1206-1217. © 2016 International Society for Magnetic Resonance in Medicine.
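The two semiautomated criteria compared in this abstract reduce to simple intensity thresholds: FWHM counts pixels above a fixed fraction of the maximum signal, and n-SD counts pixels more than n standard deviations above the mean of a remote (non-infarcted) region. A minimal sketch on synthetic pixel values (the function names, the toy data, and the use of plain lists instead of images are our own illustrative assumptions):

```python
import statistics

def infarct_size_fwhm(pixels, fraction=0.5):
    """FWHM-style sizing: pixels at or above `fraction` of the maximum
    intensity count as infarct (classical FWHM uses fraction = 0.5)."""
    thr = fraction * max(pixels)
    return sum(1 for p in pixels if p >= thr)

def infarct_size_nsd(pixels, remote, n=3):
    """n-SD sizing: pixels more than n standard deviations above the mean
    of a remote-myocardium sample count as infarct."""
    thr = statistics.mean(remote) + n * statistics.pstdev(remote)
    return sum(1 for p in pixels if p >= thr)
```

For example, with a remote sample near intensity 10 and a few hyperenhanced pixels, the FWHM criterion at 0.5 and the 3-SD criterion generally select different pixel counts, which is exactly the kind of disagreement the study quantifies.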
Alafate, Aierken; Shinya, Takayoshi; Okumura, Yoshihiro; Sato, Shuhei; Hiraki, Takao; Ishii, Hiroaki; Gobara, Hideo; Kato, Katsuya; Fujiwara, Toshiyoshi; Miyoshi, Shinichiro; Kaji, Mitsumasa; Kanazawa, Susumu
2013-01-01
We retrospectively evaluated the accumulation of fluorodeoxyglucose (FDG) in pulmonary malignancies without local recurrence during 2-year follow-up on positron emission tomography (PET)/computed tomography (CT) after radiofrequency ablation (RFA). Thirty tumors in 25 patients were studied (10 non-small cell lung cancers; 20 pulmonary metastatic tumors). PET/CT was performed before RFA, 3 months after RFA, and 6 months after RFA. We assessed FDG accumulation with the maximum standardized uptake value (SUVmax) and compared it with the diameters of the lesions. SUVmax had a decreasing tendency in the first 6 months and, at 6 months post-ablation, FDG accumulation was less affected by inflammatory changes than at 3 months post-RFA. The diameter of the ablated lesion exceeded that of the initial tumor at 3 months post-RFA and shrank to pre-ablation dimensions by 6 months post-RFA. SUVmax was more reliable than the size measurements by CT in the first 6 months after RFA, and PET/CT at 6 months post-RFA may be more appropriate for the assessment of FDG accumulation than that at 3 months post-RFA.
Nicassio, P M
1977-12-01
A study was conducted to determine the way in which stereotypes of machismo and femininity are associated with family size and perceptions of family planning. A total of 144 adults, male and female, from a lower class and an upper middle class urban area in Colombia were asked to respond to photographs of Colombian families varying in size and state of completeness. The study illustrated the critical role of sex-role identity and sex-role organization as variables having an effect on fertility. The lower-class respondents described parents in the photographs as significantly more macho or feminine because of their children than the upper-middle-class subjects did. Future research should attempt to measure when this drive to sex-role identity is strongest, i.e., when men and women are most driven to reproduce in order to "prove" themselves. Both lower- and upper-middle-class male groups considered male dominance in marriage to be directly linked with family size. Perceptions of the use of family planning decreased linearly with family size for both social groups, although the lower-class females attributed more family planning to spouses of large families than upper-middle-class females. It is suggested that further research deal with the ways in which constructs of machismo and male dominance vary between the sexes and among socioeconomic groups and the ways in which they impact on fertility.
Sun, Deyong; Huan, Yu; Qiu, Zhongfeng; Hu, Chuanmin; Wang, Shengqiang; He, Yijun
2017-10-01
Phytoplankton size class (PSC), a measure of different phytoplankton functional and structural groups, is a key parameter to the understanding of many marine ecological and biogeochemical processes. In turbid waters where optical properties may be influenced by terrigenous discharge and nonphytoplankton water constituents, remote estimation of PSC is still a challenging task. Here based on measurements of phytoplankton diagnostic pigments, total chlorophyll a, and spectral reflectance in turbid waters of Bohai Sea and Yellow Sea during summer 2015, a customized model is developed and validated to estimate PSC in the two semienclosed seas. Five diagnostic pigments determined through high-performance liquid chromatography (HPLC) measurements are first used to produce weighting factors to model phytoplankton biomass (using total chlorophyll a as a surrogate) with relatively high accuracies. Then, a common method used to calculate contributions of microphytoplankton, nanophytoplankton, and picophytoplankton to the phytoplankton assemblage (i.e., Fm, Fn, and Fp) is customized using local HPLC and other data. Exponential functions are tuned to model the size-specific chlorophyll a concentrations (Cm, Cn, and Cp for microphytoplankton, nanophytoplankton, and picophytoplankton, respectively) with remote-sensing reflectance (Rrs) and total chlorophyll a as the model inputs. Such a PSC model shows two improvements over previous models: (1) a practical strategy (i.e., model Cp and Cn first, and then derive Cm as C-Cp-Cn) with an optimized spectral band (680 nm) for Rrs as the model input; (2) local parameterization, including a local chlorophyll a algorithm. The performance of the PSC model is validated using in situ data that were not used in the model development. Application of the PSC model to GOCI (Geostationary Ocean Color Imager) data leads to spatial and temporal distribution patterns of phytoplankton size classes (PSCs) that are consistent with results reported from
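The modeling strategy described above (fit the pico and combined pico+nano chlorophyll pools first, then derive the micro pool as the remainder) follows the standard three-component abundance-based formulation, in which the size-fractionated chlorophyll saturates exponentially with total chlorophyll-a. A minimal sketch; the parameter values below are illustrative in the spirit of published global fits, whereas the paper re-tunes such parameters locally with HPLC data:

```python
import math

def size_class_chla(C, Cpn_max=1.057, Spn=0.851, Cp_max=0.107, Sp=6.801):
    """Three-component model: chlorophyll-a in the combined pico+nano pool
    and in the pico pool saturates exponentially with total chlorophyll-a C;
    microphytoplankton takes the remainder (Cm = C - Cn - Cp)."""
    Cpn = Cpn_max * (1.0 - math.exp(-Spn * C))   # pico + nano together
    Cp = Cp_max * (1.0 - math.exp(-Sp * C))      # pico alone
    Cn = Cpn - Cp                                # nano
    Cm = C - Cpn                                 # micro = remainder
    return Cm, Cn, Cp
```

By construction the three pools sum to the total chlorophyll-a, and the microphytoplankton fraction grows with increasing total biomass, which is the qualitative behavior these models are designed to capture.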
Directory of Open Access Journals (Sweden)
Ulrich Hoffrage
2011-02-01
Full Text Available In a series of three experiments, participants made inferences about which one of a pair of two objects scored higher on a criterion. The first experiment was designed to contrast the prediction of Probabilistic Mental Model theory (Gigerenzer, Hoffrage, and Kleinbölting, 1991) concerning sampling procedure with the hard-easy effect. The experiment failed to support the theory's prediction that a particular pair of randomly sampled item sets would differ in percentage correct; but the observation that German participants performed practically as well on comparisons between U.S. cities (many of which they did not even recognize) as on comparisons between German cities (about which they knew much more) ultimately led to the formulation of the recognition heuristic. Experiment 2 was a second, this time successful, attempt to unconfound item difficulty and sampling procedure. In Experiment 3, participants' knowledge and recognition of each city was elicited, and how often this could be used to make an inference was manipulated. Choices were consistent with the recognition heuristic in about 80% of the cases when it discriminated and people had no additional knowledge about the recognized city (and in about 90% when they had such knowledge). The frequency with which the heuristic could be used affected the percentage correct, mean confidence, and overconfidence as predicted. The size of the reference class, which was also manipulated, modified these effects in meaningful and theoretically important ways.
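The decision rule the abstract describes is simple enough to state as code. A minimal sketch, assuming a recognition set and a hypothetical knowledge table of criterion scores (the city names and scores below are illustrative, not data from the experiments):

```python
def recognition_choice(a, b, recognized, knowledge=None):
    """Recognition heuristic: if exactly one of two objects is recognized,
    infer that the recognized one scores higher on the criterion.  If both
    (or neither) are recognized, fall back on a knowledge table of
    hypothetical criterion scores, else pick the first object as a
    deterministic stand-in for guessing."""
    ra, rb = a in recognized, b in recognized
    if ra != rb:
        return a if ra else b
    if knowledge and a in knowledge and b in knowledge:
        return a if knowledge[a] >= knowledge[b] else b
    return a
```

The heuristic only discriminates when recognition differs between the two objects, which is why the experiments manipulate how often that case occurs.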
Wollschläger, Jochen; Wiltshire, Karen Helen; Petersen, Wilhelm; Metfies, Katja
2015-05-01
Investigation of phytoplankton biodiversity, ecology, and biogeography is crucial for understanding marine ecosystems. Research is often carried out on the basis of microscopic observations, but due to the limitations of this approach regarding detection and identification of picophytoplankton (0.2-2 μm) and nanophytoplankton (2-20 μm), these investigations are mainly focused on the microphytoplankton (20-200 μm). In the last decades, various methods based on optical and molecular biological approaches have evolved which enable a more rapid and convenient analysis of phytoplankton samples and a more detailed assessment of small phytoplankton. In this study, a selection of these methods (in situ fluorescence, flow cytometry, genetic fingerprinting, and DNA microarray) was placed in complement to light microscopy and HPLC-based pigment analysis to investigate both biomass distribution and community structure of phytoplankton. As far as possible, the size classes were analyzed separately. Investigations were carried out on six cruises in the German Bight in 2010 and 2011 to analyze both spatial and seasonal variability. Microphytoplankton was identified as the major contributor to biomass in all seasons, followed by the nanophytoplankton. Generally, biomass distribution was patchy, but the overall contribution of small phytoplankton was higher in offshore areas and also in areas exhibiting higher turbidity. Regarding temporal development of the community, differences between the small phytoplankton community and the microphytoplankton were found. The latter exhibited a seasonal pattern regarding number of taxa present, alpha- and beta-diversity, and community structure, while for the nano- and especially the picophytoplankton, a general shift in the community between both years was observable without seasonality. Although the reason for this shift remains unclear, the results imply a different response of large and small phytoplankton to environmental influences.
Directory of Open Access Journals (Sweden)
Su-Wei Fan
2005-12-01
Full Text Available A permanent 2.21 ha plot of lowland subtropical rainforest was established at Nanjen Lake of the Nanjenshan Nature Reserve in southern Taiwan. All free-standing woody plants in the plot with DBH ≥ 1 cm were identified, measured, tagged, and mapped. A total of 120 tree species (21,592 stems), belonging to 44 families and 83 genera, was recorded. The community structure was characterized by a relative dominance of Castanopsis carlesii in the canopy, Illicium arborescens in the subcanopy, and Psychotria rubra in the understory. The dominant families were Fagaceae, Illiciaceae, Aquifoliaceae, Lauraceae and Theaceae. However, typical species of lowland areas in Taiwan, such as members of Euphorbiaceae and Moraceae, were relatively rare. Thus, the floristic composition of this area was comparable with that found in some of the subtropical rain forests or even warm-temperate rain forests of the Central Range in Taiwan. The analysis of size-class distributions of individual species showed good recruitment patterns with a rich sapling bank for each species. TWINSPAN analysis revealed four distinct groups of samples, with the ridge top and northwest streamside plant communities representing two opposite extremes of the gradient. The dominant families of the ridge group were Fagaceae, Illiciaceae, Theaceae, Aquifoliaceae and Lauraceae, whereas those dominating the streamside group were Rubiaceae, Araliaceae, Lauraceae, Fagaceae, and Staphyleaceae. Most species had a patchy distribution and many were distributed randomly. Among those with a patchy distribution, Cyclobalanopsis championii and Rhododendron simsii only occurred on the ridge top, while Drypetes karapinensis and Ficus fistulosa occurred along the streamside. Illicium arborescens and Ilex cochinchinensis were commonly distributed on the intermediate slope. Species that appeared to be randomly or near-randomly distributed over the plot included Schefflera octophylla and Daphniphyllum glaucescens ssp
DEFF Research Database (Denmark)
van Gemert, Rob; Andersen, Ken Haste
2018-01-01
-in-life density-dependent growth: North Sea plaice (Pleuronectes platessa), Northeast Atlantic (NEA) mackerel (Scomber scombrus), and Baltic sprat (Sprattus sprattus balticus). For all stocks, the model predicts exploitation at MSY with a large size-at-entry into the fishery, indicating that late-in-life density...
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
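The core of the learning dynamics described above is log-likelihood ascent in which the gradient for each parameter is the difference between the empirical moment and the model moment. A minimal pure-Python sketch for a tiny pairwise Ising model, using exact enumeration in place of the paper's Gibbs sampling (all function names are our own; the paper's rectification and posterior-sampling machinery are not shown):

```python
import itertools
import math

def ising_moments(h, J, n):
    """Exact first and second moments of a pairwise Ising model
    P(s) ~ exp(sum_i h_i s_i + sum_(i<j) J_ij s_i s_j), s_i in {-1, +1},
    computed by brute-force enumeration of all 2^n states."""
    states = list(itertools.product([-1, 1], repeat=n))
    weights = [math.exp(sum(h[i] * s[i] for i in range(n)) +
                        sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))
               for s in states]
    Z = sum(weights)
    m1 = [sum(w * s[i] for s, w in zip(states, weights)) / Z for i in range(n)]
    m2 = {(i, j): sum(w * s[i] * s[j] for s, w in zip(states, weights)) / Z
          for (i, j) in J}
    return m1, m2

def fit_maxent(data, n, steps=2000, lr=0.1):
    """Steepest-ascent learning: each parameter moves along
    (empirical moment) - (model moment), the log-likelihood gradient."""
    N = len(data)
    emp1 = [sum(s[i] for s in data) / N for i in range(n)]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    emp2 = {p: sum(s[p[0]] * s[p[1]] for s in data) / N for p in pairs}
    h = [0.0] * n
    J = {p: 0.0 for p in pairs}
    for _ in range(steps):
        m1, m2 = ising_moments(h, J, n)
        for i in range(n):
            h[i] += lr * (emp1[i] - m1[i])
        for p in pairs:
            J[p] += lr * (emp2[p] - m2[p])
    return h, J
```

The inhomogeneous curvature the abstract refers to is what makes this plain steepest ascent slow on realistic data; the paper's contribution is a rectification of the parameter space that avoids that slowdown.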
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
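Maximum likelihood estimates of a two-component normal mixture are usually obtained with the EM algorithm. The sketch below uses synthetic data as a stand-in for the paper's price series (which are not available here); the EM updates themselves are the standard ones.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for the economic data: two normal components
x = np.concatenate([rng.normal(-2.0, 1.0, 400), rng.normal(3.0, 1.0, 600)])

# Initial guesses for weights, means, and standard deviations
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sig = np.array([1.0, 1.0])

def normal_pdf(x, mu, sig):
    return np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

for _ in range(200):
    # E-step: posterior responsibility of each component for each observation
    r = w * normal_pdf(x, mu, sig)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixture weights, means, and standard deviations
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

Each EM iteration cannot decrease the likelihood, and for well-separated components the recovered means, weights, and spreads converge close to the generating values, illustrating the consistency property the abstract invokes.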
Energy Technology Data Exchange (ETDEWEB)
Magnusson, A. K.; LaGory, K. E.; Hayse, J. W.; Environmental Science Division
2010-06-25
Flaming Gorge Dam, a hydroelectric facility operated by the Bureau of Reclamation (Reclamation), is located on the Green River in Daggett County, northeastern Utah. Until recently, and since the early 1990s, single daily peak releases or steady flows have been the operational pattern of the dam during the winter period. However, releases from Flaming Gorge Reservoir followed a double-peak pattern (two daily flow peaks) during the winters of 2006-2007 and 2008-2009. Because there is little recent long-term history of double-peaking at Flaming Gorge Dam, the potential effects of double-peaking operations on trout body condition in the dam's tailwater are not known. A study plan was developed that identified research activities to evaluate potential effects from winter double-peaking operations (Hayse et al. 2009). Along with other tasks, the study plan identified the need to conduct a statistical analysis of historical trout condition and macroinvertebrate abundance to evaluate the potential effects of hydropower operations. The results from analyses based on the combined size classes of trout (85-630 mm) were presented in Magnusson et al. (2008). The results of this earlier analysis suggested possible relationships between trout condition and flow, but concern that some of the relationships resulted from size-based effects (e.g., apparent changes in condition may have been related to concomitant changes in size distribution, because small trout may have responded differently to flow than large trout) prompted additional analysis of within-size class relationships. This report presents the results of analyses of three different size classes of trout (small: 200-299 mm, medium: 300-399 mm, and large: ≥400 mm body length). We analyzed historical data to (1) describe temporal patterns and relationships among flows, benthic macroinvertebrate abundance, and condition of brown trout (Salmo trutta) and rainbow trout (Oncorhynchus mykiss) in the tailwaters of Flaming
International Nuclear Information System (INIS)
Magnusson, A.K.; LaGory, K.E.; Hayse, J.W.
2010-01-01
Flaming Gorge Dam, a hydroelectric facility operated by the Bureau of Reclamation (Reclamation), is located on the Green River in Daggett County, northeastern Utah. Until recently, and since the early 1990s, single daily peak releases or steady flows have been the operational pattern of the dam during the winter period. However, releases from Flaming Gorge Reservoir followed a double-peak pattern (two daily flow peaks) during the winters of 2006-2007 and 2008-2009. Because there is little recent long-term history of double-peaking at Flaming Gorge Dam, the potential effects of double-peaking operations on trout body condition in the dam's tailwater are not known. A study plan was developed that identified research activities to evaluate potential effects from winter double-peaking operations (Hayse et al. 2009). Along with other tasks, the study plan identified the need to conduct a statistical analysis of historical trout condition and macroinvertebrate abundance to evaluate the potential effects of hydropower operations. The results from analyses based on the combined size classes of trout (85-630 mm) were presented in Magnusson et al. (2008). The results of this earlier analysis suggested possible relationships between trout condition and flow, but concern that some of the relationships resulted from size-based effects (e.g., apparent changes in condition may have been related to concomitant changes in size distribution, because small trout may have responded differently to flow than large trout) prompted additional analysis of within-size class relationships. This report presents the results of analyses of three different size classes of trout (small: 200-299 mm, medium: 300-399 mm, and large: ≥400 mm body length). We analyzed historical data to (1) describe temporal patterns and relationships among flows, benthic macroinvertebrate abundance, and condition of brown trout (Salmo trutta) and rainbow trout (Oncorhynchus mykiss) in the tailwaters of Flaming
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
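The core idea behind the MCC framework above is that a Gaussian-kernel correntropy objective automatically downweights samples whose labels disagree strongly with the prediction. The sketch below is a minimal illustration with made-up 2-D data and plain gradient ascent; the paper's actual half-quadratic/alternating optimization and experimental setup are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 400, 2
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])      # clean labels (hypothetical data)
flip = rng.random(n) < 0.15               # 15% of labels corrupted
y_noisy = np.where(flip, -y, y)

sigma, lam, eta = 1.0, 0.01, 0.1
w = np.zeros(d)

for _ in range(500):
    f = X @ w
    e = y_noisy - f
    # Gaussian kernel: badly mislabeled samples (large |e|) get tiny weight,
    # which is the robustness mechanism of maximum correntropy
    k = np.exp(-e**2 / (2 * sigma**2))
    # Gradient of correntropy objective minus L2 regularization penalty
    grad = (k * e) @ X / (n * sigma**2) - lam * w
    w += eta * grad
```

Because the flipped samples end up with near-zero kernel weight, the learned direction stays close to the clean separator, unlike a squared loss which weights all residuals equally.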
Ding, Yongxia; Zhang, Peili
2018-06-12
Problem-based learning (PBL) is an effective and highly efficient teaching approach that is extensively applied in education systems across a variety of countries. This study aimed to investigate the effectiveness of web-based PBL teaching pedagogies in large classes. The cluster sampling method was used to separate two college-level nursing student classes (graduating class of 2013) into two groups. The experimental group (n = 162) was taught using a web-based PBL teaching approach, while the control group (n = 166) was taught using conventional teaching methods. After comparing teaching outcomes pertaining to examinations and self-learning capacity between the two groups, we assessed the experimental group's satisfaction with the web-based PBL teaching mode. Compared with the control group, examination scores and self-learning capabilities were significantly higher in the experimental group (P < 0.05), and students expressed satisfaction with the web-based PBL teaching approach. In a large class-size teaching environment, the web-based PBL teaching approach appears preferable to traditional teaching methods. These results demonstrate the effectiveness of web-based teaching technologies in problem-based learning. Copyright © 2018. Published by Elsevier Ltd.
Energy Technology Data Exchange (ETDEWEB)
McGinnis, R.E.; Spielman, R.S. [Univ. of Pennsylvania School of Medicine, Philadelphia, PA (United States)
1994-09-01
The 5′ flanking polymorphism (5′FP), a hypervariable region at the 5′ end of the insulin gene, has "class 1" alleles (650-900 bp long) that are in positive linkage disequilibrium with insulin-dependent diabetes mellitus (IDDM). The authors report that precise sizing of the 5′FP yields a bimodal frequency distribution of class 1 allele lengths. Class 1 alleles belonging to the lower component (650-750 bp) of the bimodal distribution were somewhat more highly associated with IDDM than were alleles from the upper component (760-900 bp), but the difference was not statistically significant. They also examined 5′FP length variation in relation to allelic variation at nearby polymorphisms. At biallelic RFLPs on both sides of the 5′FP, they found that one allele exhibits near-total association with the upper component of the 5′FP class 1 distribution. Such associations represent a little-known but potentially widespread form of linkage disequilibrium. In this type of disequilibrium, a flanking allele has near-complete association with a single mode of VNTR alleles whose lengths represent consecutive numbers of tandem repeats (CNTR). Such extreme disequilibrium between a CNTR mode and flanking alleles may originate and persist because length mutations at some VNTR loci usually add or delete only one or two repeat units. 22 refs., 5 figs., 6 tabs.
Czech Academy of Sciences Publication Activity Database
Francová, Kateřina; Ondračková, Markéta
2013-01-01
Vol. 82, No. 2 (2013), pp. 555-568 ISSN 0022-1112 R&D Projects: GA ČR GBP505/12/G112 Institutional support: RVO:68081766 Keywords: energy reserves * parasite-induced mortality * size-dependent * young-of-the-year fish Subject RIV: EG - Zoology Impact factor: 1.734, year: 2013
Energy Technology Data Exchange (ETDEWEB)
Boyaval, S.
2000-06-15
This PhD thesis presents a study of a series of high-pressure swirl atomizers dedicated to Gasoline Direct Injection (GDI). Measurements are performed under stationary and pulsed working conditions. A major aspect of this thesis is the development of an original experimental set-up to correct for the multiple light scattering that biases drop size distribution measurements obtained with a laser diffraction technique (Malvern 2600D). This technique makes it possible to study drop size characteristics near the injector tip. Correction factors for the drop size characteristics and for the diffracted intensities are defined from the developed procedure. Another contribution is the application of the Maximum Entropy Formalism (MEF) to calculate drop size distributions. Comparisons between experimental distributions corrected with the correction factors and the calculated distributions show good agreement. This work points out that the mean diameter D43, which is also the mean of the volume drop size distribution, and the relative volume span factor Δv are important characteristics of volume drop size distributions. The final part of the thesis determines local drop size characteristics through a new development of a deconvolution technique for line-of-sight scattering measurements. The first results show reliable behaviour of the radial evolution of the local characteristics. For the GDI application, the critical point is the opening stage of the injection. This study clearly shows the effects of injection pressure and nozzle internal geometry on the working characteristics of these injectors, in particular the influence of the pre-spray. The work highlights important behaviours that improvements of the GDI concept ought to take into account. (author)
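The Maximum Entropy Formalism mentioned above selects the least-biased drop size distribution consistent with prescribed moment constraints. The sketch below assumes a discretized, normalized diameter axis and two hypothetical moment values (not the thesis's measured D43 or span factor), and finds the Lagrange multipliers by ascent on the concave dual problem.

```python
import numpy as np

# Normalized diameter grid u = D / D_max (discretization is an assumption)
u = np.linspace(0.0, 1.0, 400)
feats = np.vstack([u, u**2])          # constraint functions: first two moments
target = np.array([0.30, 0.12])       # prescribed <u> and <u^2> (hypothetical)

# The maxent solution has the exponential form p(u) ~ exp(lam1*u + lam2*u^2);
# the multipliers lam are found by gradient ascent on the dual objective,
# whose gradient is (target moments - current model moments).
lam = np.zeros(2)
for _ in range(5000):
    p = np.exp(lam @ feats)
    p /= p.sum()
    lam += 1.0 * (target - feats @ p)

p = np.exp(lam @ feats)
p /= p.sum()   # final maximum entropy drop size distribution on the grid
```

With more constraint functions (e.g. higher-order moments tied to measured mean diameters), the same dual ascent yields the richer distributions the MEF literature uses for sprays.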
Cheok, Jessica; Pressey, Robert L; Weeks, Rebecca; Andréfouët, Serge; Moloney, James
2016-01-01
Spatial data characteristics have the potential to influence various aspects of prioritising biodiversity areas for systematic conservation planning. There has been some exploration of the combined effects of size of planning units and level of classification of physical environments on the pattern and extent of priority areas. However, these data characteristics have yet to be explicitly investigated in terms of their interaction with different socioeconomic cost data during the spatial prioritisation process. We quantify the individual and interacting effects of three factors (planning-unit size, thematic resolution of reef classes, and spatial variability of socioeconomic costs) on spatial priorities for marine conservation, in typical marine planning exercises that use reef classification maps as a proxy for biodiversity. We assess these factors by creating 20 unique prioritisation scenarios involving combinations of different levels of each factor. Because output data from these scenarios are analogous to ecological data, we applied ecological statistics to determine spatial similarities between reserve designs. All three factors influenced prioritisations to different extents, with cost variability having the largest influence, followed by planning-unit size and thematic resolution of reef classes. The effect of thematic resolution on spatial design depended on the variability of cost data used. In terms of incidental representation of conservation objectives derived from finer-resolution data, scenarios prioritised with uniform cost outperformed those prioritised with variable cost. Following our analyses, we make recommendations to help maximise the spatial and cost efficiency and potential effectiveness of future marine conservation plans in similar planning scenarios. We recommend that planners: employ the smallest planning-unit size practical; invest in data at the highest possible resolution; and, when planning across regional extents with the intention
Allen, Gregory Harold
Chemical speciation and source apportionment of size-fractionated atmospheric aerosols were investigated using laser desorption time-of-flight mass spectrometry (LD TOF-MS), and source apportionment was carried out using carbon-14 accelerator mass spectrometry (14C AMS). Sample collection was carried out using the Davis Rotating-drum Unit for Monitoring impact analyzer in Davis, Colfax, and Yosemite, CA. Ambient atmospheric aerosols collected during the winters of 2010/11 and 2011/12 showed a significant difference in the types of compounds found in the small and large size particles. The difference was due to the increased number of oxidized carbon species found in the small particle size ranges but not in the large particle size ranges. Overall, the ambient atmospheric aerosols collected during the winter in Davis, CA had an average fraction modern of F14C = 0.753 +/- 0.006, indicating that the majority of the size-fractionated particles originated from biogenic sources. Samples collected during the King Fire in Colfax, CA were used to determine the contribution of biomass burning (wildfire) aerosols. Factor analysis was used to reduce the ions found in the LD TOF-MS analysis of the King Fire samples. The final factor analysis generated a total of four factors that explained an overall 83% of the variance in the data set. Two of the factors correlated heavily with increased smoke events during the sample period. The increased smoke events produced a large number of highly oxidized organic aerosols (OOA2) and aromatic compounds that are indicative of biomass burning organic aerosols (WBOA). The signal intensities of the factors generated from the King Fire data were investigated in samples collected in Yosemite and Davis, CA to examine the impact of biomass burning on ambient atmospheric aerosols. In both comparison sample collections, the OOA2 and WBOA factors increased during biomass burning events located near the sampling sites. The correlation
Weng, Yiqun; Colle, Marivi; Wang, Yuhui; Yang, Luming; Rubinstein, Mor; Sherman, Amir; Ophir, Ron; Grumet, Rebecca
2015-09-01
QTL analysis in multi-development stages with different QTL models identified 12 consensus QTLs underlying fruit elongation and radial growth presenting a dynamic view of genetic control of cucumber fruit development. Fruit size is an important quality trait in cucumber (Cucumis sativus L.) of different market classes. However, the genetic and molecular basis of fruit size variations in cucumber is not well understood. In this study, we conducted QTL mapping of fruit size in cucumber using F2, F2-derived F3 families and recombinant inbred lines (RILs) from a cross between two inbred lines Gy14 (North American picking cucumber) and 9930 (North China fresh market cucumber). Phenotypic data of fruit length and diameter were collected at three development stages (anthesis, immature and mature fruits) in six environments over 4 years. QTL analysis was performed with three QTL models including composite interval mapping (CIM), Bayesian interval mapping (BIM), and multiple QTL mapping (MQM). Twenty-nine consistent and distinct QTLs were detected for nine traits from multiple mapping populations and QTL models. Synthesis of information from available fruit size QTLs allowed establishment of 12 consensus QTLs underlying fruit elongation and radial growth, which presented a dynamic view of genetic control of cucumber fruit development. Results from this study highlighted the benefits of QTL analysis with multiple QTL models and different mapping populations in improving the power of QTL detection. Discussion was presented in the context of domestication and diversifying selection of fruit length and diameter, marker-assisted selection of fruit size, as well as identification of candidate genes for fruit size QTLs in cucumber.
U.S. Environmental Protection Agency — A "Class 1" area is a geographic area recognized by the EPA as being of the highest environmental quality and requiring maximum protection. Class I areas are areas...
International Nuclear Information System (INIS)
Kurkela, E.
1997-01-01
(Conference paper). Different energy production systems based on biomass and waste gasification are being developed in Finland. In 1986-1995 the Finnish gasification research and development activities were almost fully devoted to the development of simplified IGCC power systems suitable for large-scale power production based on pressurized fluid-bed gasification, hot gas cleaning and a combined-cycle process. In the 1990s the atmospheric-pressure gasification activities aimed at small and medium-size plants were restarted in Finland. Atmospheric-pressure fixed-bed gasification of wood and peat was commercialized for small-scale district heating applications already in the 1980s. Today research and development in this field aims at developing a combined heat and power plant based on the use of cleaned product gas in internal combustion engines. Another objective is to broaden the feedstock base of fixed-bed gasifiers, which at present are limited to the use of piece-shaped fuels such as sod peat and wood chips. Intensive research and development is currently in progress on atmospheric-pressure circulating fluidized-bed gasification of biomass residues and wastes. This gasification technology, earlier commercialized for lime-kiln applications, will lead to co-utilization of local residues and wastes in existing pulverized coal fired boilers. The first demonstration plant is under construction in Finland and there are several projects in the planning or design phase in different parts of Europe. 48 refs., 1 fig., 1 tab.
Hunt, Brian P. V.; Carlotti, François; Donoso, Katty; Pagano, Marc; D'Ortenzio, Fabrizio; Taillandier, Vincent; Conan, Pascal
2017-08-01
Knowledge of the relative contributions of phytoplankton size classes to zooplankton biomass is necessary to understand food-web functioning and response to climate change. During the Deep Water formation Experiment (DEWEX), conducted in the north-west Mediterranean Sea in winter (February) and spring (April) of 2013, we investigated phytoplankton-zooplankton trophic links in contrasting oligotrophic and eutrophic conditions. Size fractionated particulate matter (pico-POM, nano-POM, and micro-POM) and zooplankton (64 to >4000 μm) composition and carbon and nitrogen stable isotope ratios were measured inside and outside the nutrient-rich deep convection zone in the central Liguro-Provencal basin. In winter, phytoplankton biomass was low (0.28 mg m-3) and evenly spread among picophytoplankton, nanophytoplankton, and microphytoplankton. Using an isotope mixing model, we estimated average contributions to zooplankton biomass by pico-POM, nano-POM, and micro-POM of 28, 59, and 15%, respectively. In spring, the nutrient poor region outside the convection zone had low phytoplankton biomass (0.58 mg m-3) and was dominated by pico/nanophytoplankton. Estimated average contributions to zooplankton biomass by pico-POM, nano-POM, and micro-POM were 64, 28 and 10%, respectively, although the model did not differentiate well between pico-POM and nano-POM in this region. In the deep convection zone, spring phytoplankton biomass was high (1.34 mg m-3) and dominated by micro/nano phytoplankton. Estimated average contributions to zooplankton biomass by pico-POM, nano-POM, and micro-POM were 42, 42, and 20%, respectively, indicating that a large part of the microphytoplankton biomass may have remained ungrazed. Plain Language Summary: The grazing of zooplankton on algal phytoplankton is a critical step in the transfer of energy through all ocean food webs. Although microscopic, phytoplankton span an enormous size range. The smallest picophytoplankton are generally thought to be too
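The isotope mixing model used above can be illustrated in its simplest, exactly determined form: with three sources and two tracers (δ13C and δ15N), the source fractions solve a small linear system together with the mass-balance constraint that fractions sum to one. All signature values below are hypothetical, and the study's actual model likely propagated uncertainty, which this sketch omits.

```python
import numpy as np

# Hypothetical (d13C, d15N) signatures of the three POM size classes
sources = np.array([
    [-24.0, 3.0],   # pico-POM
    [-21.0, 6.0],   # nano-POM
    [-18.0, 7.0],   # micro-POM
])
mix = np.array([-21.3, 5.3])   # zooplankton signature (illustrative)

# Mixing equations: sources.T @ f = mix, plus the constraint sum(f) = 1
A = np.vstack([sources.T, np.ones(3)])
b = np.append(mix, 1.0)
f = np.linalg.solve(A, b)      # fractional contributions of each size class
```

When the source signatures are nearly collinear in isotope space (as the abstract notes for pico-POM vs nano-POM in spring), this system becomes ill-conditioned and the model cannot separate those sources, which is exactly the differentiation problem reported.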
International Nuclear Information System (INIS)
Enslin, J.H.R.
1990-01-01
A well-engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for Remote Area Power Supply systems of relatively small rating. The advantages are even greater for larger temperature variations and higher power ratings. Other advantages include optimal sizing and system monitoring and control.
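The hill-climbing tracking described above is commonly implemented as perturb-and-observe: step the operating point, and reverse direction whenever the measured power drops. The sketch below uses a toy diode-style panel model with hypothetical parameters; a real controller would perturb a converter duty cycle and measure battery-side current, as in the paper.

```python
import math

def panel_power(v):
    """Toy PV panel model (hypothetical parameters, illustration only)."""
    current = 5.0 * (1.0 - math.exp((v - 21.0) / 1.5))  # amps; drops near V_oc
    return max(v * current, 0.0)                         # watts

# Perturb-and-observe hill climbing: keep stepping while power rises,
# reverse the perturbation direction when a step reduces power.
v, step, direction = 12.0, 0.1, 1.0
p_prev = panel_power(v)
for _ in range(500):
    v += direction * step
    p = panel_power(v)
    if p < p_prev:
        direction = -direction
    p_prev = p
```

After climbing to the knee of the power curve, the operating point oscillates within a step or two of the maximum power point, which is the usual trade-off between step size, tracking speed, and steady-state ripple.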
Shaikh, Saame Raza; Rockett, Benjamin Drew; Salameh, Muhammad; Carraway, Kristen
2009-09-01
An emerging molecular mechanism by which docosahexaenoic acid (DHA) exerts its effects is modification of lipid raft organization. The biophysical model, based on studies with liposomes, shows that DHA avoids lipid rafts because of steric incompatibility between DHA and cholesterol. The model predicts that DHA does not directly modify rafts; rather, it incorporates into nonrafts to modify the lateral organization and/or conformation of membrane proteins, such as the major histocompatibility complex (MHC) class I. Here, we tested predictions of the model at a cellular level by incorporating oleic acid, eicosapentaenoic acid (EPA), and DHA, compared with a bovine serum albumin (BSA) control, into the membranes of EL4 cells. Quantitative microscopy showed that DHA treatment, but not EPA treatment, diminished lipid raft clustering and increased raft size relative to the BSA control. Approximately 30% of DHA was incorporated directly into rafts without changing the distribution of cholesterol between rafts and nonrafts. Quantification of fluorescence colocalization images showed that DHA selectively altered MHC class I lateral organization by increasing the fraction of the nonraft protein found in rafts compared with BSA. Both DHA and EPA treatments increased antibody binding to MHC class I compared with BSA. Antibody titration showed that DHA and EPA did not change MHC I conformation but increased total surface levels relative to BSA. Taken together, our findings are not in agreement with the biophysical model. Therefore, we propose a model that reconciles contradictory viewpoints from biophysical and cellular studies to explain how DHA modifies lipid rafts on several length scales. Our study supports the notion that rafts are an important target of DHA's mode of action.
Aoki, Masahiko; Sato, Mariko; Hirose, Katsumi; Akimoto, Hiroyoshi; Kawaguchi, Hideo; Hatayama, Yoshiomi; Ono, Shuichi; Takai, Yoshihiro
2015-04-22
Radiation-induced rib fracture after stereotactic body radiotherapy (SBRT) for lung cancer has recently been reported. However, the incidence of radiation-induced rib fracture after SBRT using moderate fraction sizes, assessed with a long-term follow-up, has not been clarified. We examined the incidence and risk factors of radiation-induced rib fracture after SBRT using moderate fraction sizes in patients with peripherally located lung tumors. During 2003-2008, 41 patients with 42 lung tumors were treated with SBRT to 54-56 Gy in 9-7 fractions. The endpoint of the study was radiation-induced rib fracture detected by CT scan after the treatment. All ribs for which the irradiated dose was more than 80% of the prescribed dose were selected and contoured to build dose-volume histograms (DVHs). Comparisons of several factors obtained from the DVHs and the probabilities of rib fracture calculated by the Kaplan-Meier method were performed. Median follow-up time was 68 months. Among 75 contoured ribs, 23 rib fractures were observed in 34% of the patients during 16-48 months after SBRT; however, no patient complained of chest wall pain. The 4-year probabilities of rib fracture for maximum rib dose (Dmax) more than and less than 54 Gy were 47.7% and 12.9% (p = 0.0184), and for fraction sizes of 6, 7 and 8 Gy were 19.5%, 31.2% and 55.7% (p = 0.0458), respectively. Other factors, such as D2cc, mean rib dose, V10-55, age, sex, and planning target volume, were not significantly different. The doses and fractionations used in this study resulted in no clinically significant rib fractures in this population, but higher Dmax and larger dose-per-fraction treatments resulted in an increase in asymptomatic grade 1 rib fractures.
International Nuclear Information System (INIS)
Aoki, Masahiko; Sato, Mariko; Hirose, Katsumi; Akimoto, Hiroyoshi; Kawaguchi, Hideo; Hatayama, Yoshiomi; Ono, Shuichi; Takai, Yoshihiro
2015-01-01
Radiation-induced rib fracture after stereotactic body radiotherapy (SBRT) for lung cancer has recently been reported. However, the incidence of radiation-induced rib fracture after SBRT using moderate fraction sizes, assessed with a long-term follow-up, has not been clarified. We examined the incidence and risk factors of radiation-induced rib fracture after SBRT using moderate fraction sizes in patients with peripherally located lung tumors. During 2003–2008, 41 patients with 42 lung tumors were treated with SBRT to 54–56 Gy in 9–7 fractions. The endpoint of the study was radiation-induced rib fracture detected by CT scan after the treatment. All ribs for which the irradiated dose was more than 80% of the prescribed dose were selected and contoured to build dose-volume histograms (DVHs). Comparisons of several factors obtained from the DVHs and the probabilities of rib fracture calculated by the Kaplan-Meier method were performed. Median follow-up time was 68 months. Among 75 contoured ribs, 23 rib fractures were observed in 34% of the patients during 16–48 months after SBRT; however, no patient complained of chest wall pain. The 4-year probabilities of rib fracture for maximum rib dose (Dmax) more than and less than 54 Gy were 47.7% and 12.9% (p = 0.0184), and for fraction sizes of 6, 7 and 8 Gy were 19.5%, 31.2% and 55.7% (p = 0.0458), respectively. Other factors, such as D2cc, mean rib dose, V10–55, age, sex, and planning target volume, were not significantly different. The doses and fractionations used in this study resulted in no clinically significant rib fractures in this population, but higher Dmax and larger dose-per-fraction treatments resulted in an increase in asymptomatic grade 1 rib fractures.
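The fracture probabilities quoted above come from the Kaplan-Meier product-limit estimator, which handles patients whose follow-up ends without an event (censoring). The sketch below uses invented follow-up data (months, with 1 = fracture observed and 0 = censored), not the study's patient records.

```python
# Hypothetical follow-up data: (months to fracture or censoring, event flag)
data = [(16, 1), (20, 0), (24, 1), (30, 1), (36, 0), (40, 1), (48, 1), (60, 0)]

def kaplan_meier(data):
    """Return a list of (event time, survival probability) pairs.

    At each distinct event time t, survival is multiplied by
    (1 - events_at_t / number_at_risk_at_t); censored subjects leave the
    risk set without triggering a factor.
    """
    event_times = sorted({t for t, e in data if e == 1})
    s, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for ti, _ in data if ti >= t)
        events = sum(1 for ti, e in data if ti == t and e == 1)
        s *= 1.0 - events / at_risk
        curve.append((t, s))
    return curve

curve = kaplan_meier(data)
# Cumulative fracture probability at the last event time = 1 - survival
fracture_prob = 1.0 - curve[-1][1]
```

In the study, the same estimator is evaluated within strata (e.g. Dmax above vs below 54 Gy) and the strata are compared, typically with a log-rank test.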
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
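The quantity being maximized above, the mutual information between responses and labels, can be estimated from empirical frequencies via the plug-in entropy estimate I(Y; Ŷ) = H(Y) + H(Ŷ) - H(Y, Ŷ). The sketch below computes this for discrete label sequences; it is a standalone illustration of the regularizer's target, not the paper's gradient-based optimization of a linear classifier.

```python
import math
from collections import Counter

def mutual_information(y_true, y_pred):
    """Plug-in estimate (in nats) of I(Y; Yhat) from empirical frequencies."""
    n = len(y_true)
    pxy = Counter(zip(y_true, y_pred))   # joint counts
    px = Counter(y_true)                 # marginal counts of true labels
    py = Counter(y_pred)                 # marginal counts of responses
    mi = 0.0
    for (a, b), c in pxy.items():
        # p(a,b) * log( p(a,b) / (p(a) p(b)) ), with counts substituted
        mi += (c / n) * math.log((c * n) / (px[a] * py[b]))
    return mi

labels = [0, 0, 1, 1, 0, 1, 0, 1]
mi_perfect = mutual_information(labels, labels)  # equals H(Y) for a perfect predictor
mi_weak = mutual_information(labels, [0, 1, 0, 1, 0, 1, 0, 1])
```

A perfect predictor attains the upper bound I = H(Y), while weakly informative responses yield a smaller value, which is why pushing this quantity up during training encourages responses that resolve the label.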
Directory of Open Access Journals (Sweden)
Miriam P. Albrecht
We report the consumption of scales and other food resources by the facultative lepidophage Roeboides affinis in the upper Tocantins River, where it was impounded by the Serra da Mesa Hydroelectric Dam. We compared the diet among size classes, between dry and wet seasons, and between sites with distinct water-flow characteristics (lotic vs. lentic) related to the distance from the dam and the phase of reservoir development. As transparency and fish abundance increased after impoundment, we expected a higher consumption of scales in lentic sites. Likewise, in the dry season, habitat contraction, higher transparency, and decreased availability of terrestrial resources would promote a higher consumption of scales. Scales were consumed by 92% of individuals and represented 26% of the total volume of resources ingested by R. affinis. Diet composition varied significantly among size classes, with larger individuals consuming more scales and larger items, especially odonatans and ephemeropterans. Scale consumption was not significantly different between dry and wet seasons. Roeboides affinis incorporated some food items into the diet as a response to the impoundment, like other species. Scale consumption was higher in lotic sites, refuting our initial hypothesis, which suggests that the lepidophagous habit is related to the rheophilic nature of R. affinis.
Kyriakis, Efstathios; Psomopoulos, Constantinos; Kokkotis, Panagiotis; Bourtsalas, Athanasios; Themelis, Nikolaos
2017-06-23
This study develops an algorithm that presents a step-by-step method for selecting the location and size of a waste-to-energy facility targeting maximum energy output, while also considering the basic obstacle in many cases, the gate fee. Various parameters were identified and evaluated in order to formulate the proposed decision-making method in the form of an algorithm. The principal simulation input is the amount of municipal solid waste (MSW) available for incineration, which, together with its net calorific value, is the most important factor for the feasibility of the plant. Moreover, the research focuses both on the parameters that could increase energy production and on those that affect the R1 energy efficiency factor. Estimation of the final gate fee is achieved through an economic analysis of the entire project, investigating both the expenses and the revenues expected according to the selected site and the outputs of the facility. At this point, a number of common revenue methods were included in the algorithm. The developed algorithm has been validated using three case studies in Greece: Athens, Thessaloniki, and Central Greece, where the cities of Larisa and Volos were selected for the application of the proposed decision-making tool. These case studies were selected based on a previous publication by two of the authors in which these areas were examined. Results reveal that the development of a "solid" methodological approach to selecting the site and size of a waste-to-energy (WtE) facility is feasible. However, maximization of the energy efficiency factor R1 requires high utilization factors, while minimization of the final gate fee requires a high R1 and high metals recovery from the bottom ash, as well as economic exploitation of recovered raw materials, if any.
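The R1 factor referenced above is defined in Annex II of EU Directive 2008/98/EC. A minimal sketch of the formula follows; the plant figures are hypothetical, and the directive's later climate correction factor is omitted:

```python
def r1_factor(ep, ef, ew, ei):
    """R1 energy efficiency factor (EU Directive 2008/98/EC, Annex II):
        R1 = (Ep - (Ef + Ei)) / (0.97 * (Ew + Ef))
    ep: annual energy produced as heat or electricity (GJ/yr; per the
        directive, electricity is weighted by 2.6 and heat by 1.1)
    ef: annual energy input from fuels contributing to steam production
    ew: annual energy contained in the treated waste (net calorific value)
    ei: annual imported energy, excluding ew and ef
    """
    return (ep - (ef + ei)) / (0.97 * (ew + ef))

# Hypothetical plant: a facility must reach R1 >= 0.65 (for newer plants)
# to be classed as energy recovery rather than disposal.
print(round(r1_factor(ep=700_000, ef=20_000, ew=900_000, ei=30_000), 3))  # -> 0.728
```

The formula makes the study's conclusion concrete: raising the utilization of produced energy (Ep) is the main lever on R1 for a fixed waste throughput (Ew).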
Salgado, Robina
The No Child Left Behind Act (NCLB) was signed into law in 2002 on the premise that all students, no matter the circumstances, can learn, and that highly qualified teachers should be present in every classroom (United States Department of Education, 2011). The mandates of NCLB also forced states to begin measuring progress in science proficiency beginning in 2007. The study determined the effects of teacher efficacy, the type of certification route taken by individuals, the number of content hours taken in the sciences, field-based experience, and class size on middle school student achievement as measured by the 8th-grade STAAR in a region located in South Texas. These data provide insight into the effect different teacher training methods have on secondary school science teacher efficacy in Texas and how it impacts student achievement. Additionally, the results of the study determined whether traditional and alternative certification programs are equally effective in preparing science teachers for the classroom. The study was a survey design comparing nonequivalent groups. It utilized the Science Teaching Efficacy Belief Instrument (STEBI), a 25-item efficacy scale made up of two subscales: Personal Science Teaching Efficacy Belief (PSTE) and Science Teaching Outcome Expectancy (STOE) (Bayraktar, 2011). Once the survey was completed, a 3-way ANOVA, a MANOVA, and multiple linear regression were performed in SPSS. The results indicated no significant difference between routes of certification on student achievement, but a large effect size was reported: 17% of the variance in student achievement can be accounted for by route of certification. A MANOVA was conducted to assess the differences between number of science content hours on a linear combination of personal science teaching efficacy, science teaching outcome expectancy, and total science teacher efficacy as measured by the STEBI. No significant
Winslow, Luke A.; Read, Jordan S.; Hanson, Paul C.; Stanley, Emily H.
2014-01-01
With lake abundances in the thousands to millions, creating an intuitive understanding of the distribution of morphology and processes in lakes is challenging. To improve researchers’ understanding of large-scale lake processes, we developed a parsimonious mathematical model based on the Pareto distribution to describe the distribution of lake morphology (area, perimeter, and volume). While debate continues over which mathematical representation best fits any one distribution of lake morphometric characteristics, we recognize the need for a simple, flexible model to advance understanding of how the interaction between morphometry and function dictates scaling across large populations of lakes. These models make clear the relative contribution of lakes to the total amount of lake surface area, volume, and perimeter. They also highlight the critical thresholds at which total perimeter, area, and volume would be evenly distributed across lake size-classes: Pareto slopes of 0.63, 1, and 1.12, respectively. These models of morphology can be used in combination with models of process to create overarching “lake population” level models of process. To illustrate this potential, we combine the model of surface area distribution with a model of carbon mass accumulation rate. We found that even if smaller lakes contribute relatively less to total surface area than larger lakes, the increasing carbon accumulation rate with decreasing lake size is strong enough to bias the distribution of carbon mass accumulation towards smaller lakes. This analytical framework provides a relatively simple approach to upscaling morphology and process that is easily generalizable to other ecosystem processes.
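The even-distribution threshold can be checked directly from the Pareto model: if lake abundance follows dN/dA ∝ A^(−c−1), the total area contributed by a size bin is the integral of A·dN over the bin, which is the same for every decade of lake size exactly when c = 1. A sketch, with arbitrary bin edges:

```python
import math

def area_share_per_decade(c, edges=(1e4, 1e5, 1e6, 1e7)):
    """Total lake area contributed by each size bin under a Pareto
    number-size distribution dN/dA ~ A^(-c-1); each bin's contribution
    is the integral of A * A^(-c-1) dA over [a, b] (up to a constant)."""
    def contrib(a, b):
        if abs(c - 1.0) < 1e-12:
            return math.log(b / a)                     # integral of 1/A
        return (b ** (1 - c) - a ** (1 - c)) / (1 - c)
    return [contrib(a, b) for a, b in zip(edges[:-1], edges[1:])]

# At the critical slope c = 1, every decade of lake size holds the same
# total area, matching the threshold quoted in the abstract.
print(area_share_per_decade(1.0))
```

Slopes below the threshold tilt the total toward the largest lakes, and slopes above it toward the smallest, which is the sense in which the 0.63 and 1.12 thresholds apply to perimeter and volume.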
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
International Nuclear Information System (INIS)
Anon.
1979-01-01
This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed
Jarman, Del W.; Boyland, Lori G.
2011-01-01
In recent years, economic downturn and changes to Indiana's school funding have resulted in significant financial reductions in General Fund allocations for many of Indiana's public school corporations. The main purpose of this statewide study is to examine the possible impacts of these budget reductions on class size and student achievement. This…
Energy Technology Data Exchange (ETDEWEB)
Kenyon, Scott J. [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Bromley, Benjamin C., E-mail: skenyon@cfa.harvard.edu, E-mail: bromley@physics.utah.edu [Department of Physics, University of Utah, 201 JFB, Salt Lake City, UT 84112 (United States)
2012-03-15
We investigate whether coagulation models of planet formation can explain the observed size distributions of trans-Neptunian objects (TNOs). Analyzing published and new calculations, we demonstrate robust relations between the size of the largest object and the slope of the size distribution for sizes 0.1 km and larger. These relations yield clear, testable predictions for TNOs and other icy objects throughout the solar system. Applying our results to existing observations, we show that a broad range of initial disk masses, planetesimal sizes, and fragmentation parameters can explain the data. Adding dynamical constraints on the initial semimajor axis of 'hot' Kuiper Belt objects along with probable TNO formation times of 10-700 Myr restricts the viable models to those with a massive disk composed of relatively small (1-10 km) planetesimals.
Maximum Acceleration Recording Circuit
Bozeman, Richard J., Jr.
1995-01-01
Coarsely digitized maximum levels are recorded in blown fuses. The circuit feeds power to an accelerometer and makes a nonvolatile record of the maximum level to which the output of the accelerometer rises during the measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, the circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, the circuit is simpler, less bulky, consumes less power, and costs less, eliminating the need for retrieval and analysis of data recorded in magnetic or electronic memory devices. The circuit is used, for example, to record accelerations to which commodities are subjected during transportation on trucks.
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
The maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. The new method is formulated in terms of matrix-valued functions and is therefore invariant under arbitrary unitary transformations of the input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
International Nuclear Information System (INIS)
Biondi, L.
1998-01-01
The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Robust Maximum Association Estimators
A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)
2017-01-01
The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections α′X and β′Y can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
Borucki, William; Koch, David; Lissauer, Jack; Basri, Gibor; Caldwell, John; Cochran, William; Dunham, Edward W.; Gilliland, Ronald; Caldwell, Douglas; Kondo, Yoji;
2002-01-01
The first step in discovering the extent of life in our galaxy is to determine the number of terrestrial planets in the habitable zone (HZ). The Kepler Mission is designed around a 0.95 m aperture Schmidt-type telescope with an array of 42 CCDs designed to continuously monitor the brightness of 100,000 solar-like stars to detect the transits of Earth-size and larger planets. The photometer is scheduled to be launched into heliocentric orbit in 2007. Measurements of the depth and repetition time of transits provide the size of the planet relative to the star and its orbital period. When combined with ground-based spectroscopy of these stars to fix the stellar parameters, the true planet radius and orbit scale, and hence the position relative to the HZ, are determined. These spectra are also used to discover the relationships between the characteristics of planets and the stars they orbit. In particular, the association of planet size and occurrence frequency with stellar mass and metallicity will be investigated. At the end of the four-year mission, hundreds of terrestrial planets should be discovered in and near the HZ of their stars if such planets are common. Extending the mission to six years doubles the expected number of Earth-size planets in the HZ. A null result would imply that terrestrial planets in the HZ occur in less than 1% of stars and that life might be quite rare. Based on the results of the current Doppler-velocity discoveries, detection of a thousand giant planets is expected. Information on the albedos and densities of those giants showing transits will also be obtained.
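The radius inference described above follows directly from transit geometry: the fractional dimming equals the squared radius ratio, so Rp/R* = sqrt(depth). A small sketch with rounded Earth and Sun radii:

```python
import math

def planet_radius_ratio(depth):
    """Transit depth ~ (Rp/R*)^2, so the radius ratio is sqrt(depth)."""
    return math.sqrt(depth)

# Earth crossing the Sun dims it by roughly 84 parts per million,
# which sets the photometric precision Kepler needs.
R_EARTH_OVER_R_SUN = 6371.0 / 696_000.0       # about 0.00915 (rounded radii)
depth = R_EARTH_OVER_R_SUN ** 2
print(f"{depth * 1e6:.0f} ppm")               # prints "84 ppm"
```

With the stellar radius fixed by spectroscopy, multiplying the ratio back by R* gives the true planet radius.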
Chemical and mineralogical characterization of the different aggregate size classes of Red-Yellow and Dusky Red Latosols in Paraná, Brazil
Directory of Open Access Journals (Sweden)
Vander de Freitas Melo
2008-02-01
The content and shape of clay minerals are important in the definition of soil structure morphology. To evaluate the clay mineralogy and chemical properties of different aggregate size-classes of Latosols (Red-Yellow, LBd, and Dusky Red, LVdf) derived from basalt in the state of Paraná, Brazil, soil samples of the Bw1 and Bw2 horizons were collected in four LBd and three LVdf profiles, distributed across two distinct toposequences. Dried and undisturbed soil samples were separated into six size-classes (2-4; 1-2; 0.5-1; 0.25-0.5; 0.105-0.25; < 0.105 mm), and the soluble Si in 0.5 mol L-1 acetic acid and the exchangeable K, Ca, Mg, and Al contents were determined. The clay fraction extracted from each aggregate size-class was investigated by X-ray diffraction, thermal analysis, and chemical analysis. The content of exchangeable elements did not vary among the aggregate size-classes in the Bw1 and Bw2 horizons for the Red-Yellow and Dusky Red Latosol profiles. In spite of the high and continuous weathering of these soils, the mineralogical characteristics of the aggregate clay fraction were not homogenized. The highest variation in mineral contents according to aggregate size-class was observed for the profile in the highest position of the LBd toposequence; the
Accurate modeling and maximum power point detection of ...
African Journals Online (AJOL)
Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.
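Maximum power point tracking is commonly implemented by hill-climbing schemes such as perturb-and-observe. The snippet below is a generic sketch on a toy concave P-V curve, not the modeling or sizing procedure proposed in the article; all parameter values are illustrative:

```python
def perturb_and_observe(power, v0=10.0, step=0.5, iters=100):
    """Minimal perturb-and-observe MPPT sketch: repeatedly perturb the
    operating voltage and keep the perturbation direction whenever the
    measured power increased, reversing it otherwise."""
    v, direction = v0, +1
    p_prev = power(v)
    for _ in range(iters):
        v += direction * step
        p = power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy concave P-V curve with its maximum power point at 17.5 V.
pv_curve = lambda v: 120.0 - 0.8 * (v - 17.5) ** 2
print(round(perturb_and_observe(pv_curve), 1))  # settles within one step of 17.5
```

The steady-state oscillation around the MPP, visible here as the tracker bouncing within one step of 17.5 V, is the classic trade-off that motivates adaptive step-size and neural-network variants like the one the abstract mentions.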
DEFF Research Database (Denmark)
Gasiunas, Vaidas; Mezini, Mira; Ostermann, Klaus
2007-01-01
Virtual classes allow nested classes to be refined in subclasses. In this way nested classes can be seen as dependent abstractions of the objects of the enclosing classes. Expressing dependency via nesting, however, has two limitations: abstractions that depend on more than one object cannot be modeled, and a class must know all classes that depend on its objects. This paper presents dependent classes, a generalization of virtual classes that expresses similar semantics by parameterization rather than by nesting. This increases the expressivity of class variations as well as the flexibility ... of dependent classes and a machine-checked type soundness proof in Isabelle/HOL [29], the first of this kind for a language with virtual classes and path-dependent types. [29] T. Nipkow, L.C. Paulson, and M. Wenzel. Isabelle/HOL -- A Proof Assistant for Higher-Order Logic, volume 2283 of LNCS, Springer, 2002.
Dasgupta, Debayan; Nath, Sujit; Bhanja, Dipankar
2018-04-01
Twin fluid atomizers utilize the kinetic energy of high speed gases to disintegrate a liquid sheet into fine uniform droplets. Quite often, the gas streams are injected at unequal velocities to enhance the aerodynamic interaction between the liquid sheet and surrounding atmosphere. In order to improve the mixing characteristics, practical atomizers confine the gas flows within ducts. Though the liquid sheet coming out of an injector is usually annular in shape, it can be considered to be planar as the mean radius of curvature is much larger than the sheet thickness. There are numerous studies on breakup of the planar liquid sheet, but none of them considered the simultaneous effects of confinement and unequal gas velocities on the spray characteristics. The present study performs a nonlinear temporal analysis of instabilities in the planar liquid sheet, produced by two co-flowing gas streams moving with unequal velocities within two solid walls. The results show that the para-sinuous mode dominates the breakup process at all flow conditions over the para-varicose mode of breakup. The sheet pattern is strongly influenced by gas velocities, particularly for the para-varicose mode. Spray characteristics are influenced by both gas velocity and proximity to the confining wall, but the former has a much more pronounced effect on droplet size. An increase in the difference between gas velocities at two interfaces drastically shifts the droplet size distribution toward finer droplets. Moreover, asymmetry in gas phase velocities affects the droplet velocity distribution more, only at low liquid Weber numbers for the input conditions chosen in the present study.
International Nuclear Information System (INIS)
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Prasser, H.M.; Krepper, E.; Lucas, D. [Forschungszentrum Rossendorf e.V., Dresden (Germany)
2002-01-01
The wire-mesh sensor developed by the Forschungszentrum Rossendorf produces sequences of instantaneous gas fraction distributions in a cross section with a time resolution of 1200 frames per second and a spatial resolution of about 2-3 mm. At moderate flow velocities (up to 1-2 m.s{sup -1}), bubble size distributions can be obtained, since each individual bubble is mapped in several successive distributions. The method was used to study the evolution of the bubble size distribution in a vertical two-phase flow. For this purpose, the sensor was placed downstream of an air injector, the distance between air injection and sensor was varied. The bubble identification algorithm allows to select bubbles of a given range of the effective diameter and to calculate partial gas fraction profiles for this diameter range. In this way, the different behaviour of small and large bubbles in respect to the action of the lift force was observed in a mixture of small and large bubbles. (authors)
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.
2009-01-01
We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.
LaPeyre, Megan K.; Eberline, Benjamin S.; Soniat, Thomas M.; La Peyre, Jerome F.
2013-01-01
Understanding how different life history stages are impacted by extreme or stochastic environmental variation is critical for predicting and modeling organism population dynamics. This project examined recruitment, growth, and mortality of seed (25–75 mm) and market (>75 mm) sized oysters along a salinity gradient over two years in Breton Sound, LA. In April 2010, management responses to the Deepwater Horizon oil spill resulted in extreme low salinity. Low salinity combined with high water temperatures (>25 °C) significantly and negatively impacted oyster recruitment, survival, and growth in 2010, while low salinity at cooler temperatures (<25 °C) ... With increasing management of our freshwater inputs to estuaries, combined with predicted climate changes, how extreme events affect different life history stages is key to understanding variation in population demographics of commercially important species and predicting future populations.
Directory of Open Access Journals (Sweden)
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
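For the Mean Energy Model mentioned above, the maximizing distribution over finite states takes the Gibbs form p_i ∝ exp(−βE_i), with β tuned so the moment constraint holds. A sketch using bisection; the state energies and target mean are illustrative (the classic Jaynes dice example), not from the paper:

```python
import math

def maxent_distribution(energies, mean_energy, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum-entropy distribution over finite states subject to a mean
    'energy' constraint: p_i proportional to exp(-beta * E_i), with beta
    found by bisection so that sum(p_i * E_i) equals the target mean."""
    def mean_at(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(wi * e for wi, e in zip(w, energies)) / z
    # mean_at is decreasing in beta, so bisect on beta:
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_at(mid) > mean_energy:
            lo = mid
        else:
            hi = mid
    beta = (lo + hi) / 2
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

# A die whose faces have "energy" equal to their value, constrained to
# average 4.5 instead of the fair 3.5: weights shift toward high faces.
p = maxent_distribution([1, 2, 3, 4, 5, 6], 4.5)
print([round(x, 3) for x in p])
```

Because the target mean exceeds 3.5, the solved β is negative and the probabilities increase monotonically toward the high faces.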
Probable maximum flood control
International Nuclear Information System (INIS)
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1988-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
International Nuclear Information System (INIS)
Rust, D.M.
1984-01-01
The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1989-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Functional Maximum Autocorrelation Factors
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA [Ramsay, 1997] to functional maximum autocorrelation factors (MAF) [Switzer, 1985; Larsen, 2001]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between ... Conclusions. Functional MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially ...
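A minimal numerical sketch of the MAF idea (not the authors' functional, spline-based implementation): with samples ordered along time or space, MAF directions minimize the variance of one-step differences relative to total variance, a generalized eigenproblem. All data below are synthetic:

```python
import numpy as np

def maf(X):
    """Maximum autocorrelation factors sketch. Rows of X are samples
    ordered along time/space. Solves the generalized eigenproblem
    S_d w = lambda S w, where S is the covariance of X and S_d the
    covariance of one-step differences; smallest lambda corresponds to
    the smoothest (highest-autocorrelation) factor."""
    S = np.cov(X, rowvar=False)
    Sd = np.cov(np.diff(X, axis=0), rowvar=False)
    evals, evecs = np.linalg.eig(np.linalg.solve(S, Sd))
    order = np.argsort(evals.real)          # smoothest factor first
    return evecs.real[:, order], evals.real[order]

# Mix a smooth sine with white noise; the first MAF recovers the sine.
t = np.arange(500)
slow = np.sin(2 * np.pi * t / 100)
noise = np.random.default_rng(0).standard_normal(500)
X = np.column_stack([slow, noise]) @ np.array([[1.0, 0.4], [0.3, 1.0]])
W, lam = maf(X)
print(round(abs(np.corrcoef(X @ W[:, 0], slow)[0, 1]), 3))  # close to 1
```

This is the sense in which MAF concentrates the "interesting" smooth variation at one end of the eigenvalue spectrum, whereas PCA would rank components by raw variance regardless of smoothness.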
A new class of nontopological solitons
International Nuclear Information System (INIS)
Li Xinzhou; Ni Zhixiang; Zhang Jianzu
1992-09-01
We construct a new class of nontopological solitons with scalar self-interaction term κφ^4. Because of the scalar self-interaction, there is a maximum size for these objects. There exists a critical value κ_crit for the coupling κ: for κ > κ_crit there are no stable nontopological solitons. In the thin-walled limit, we show the explicit solutions of NTS with scalar self-interaction and/or gauge interaction. In the case of a gauged NTS, the soliton becomes a superconductor. (author). 11 refs
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
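A minimal sketch of the simplest member of this model class, Ornstein-Uhlenbeck motion, simulated with the Euler-Maruyama scheme. The noise amplitude is fixed by the fluctuation-dissipation relation mentioned above so that the stationary variance equals sigma squared; parameter names and values are our assumptions, not the paper's.

```python
import math
import random

def simulate_ou(tau, sigma, dt=0.01, n_steps=100_000, seed=1):
    """Euler-Maruyama integration of an Ornstein-Uhlenbeck velocity process:
    dv = -(v / tau) dt + sqrt(2 * sigma**2 / tau) dW.
    The noise amplitude obeys a fluctuation-dissipation relation, so the
    stationary variance is sigma**2 (illustrative sketch only)."""
    rng = random.Random(seed)
    v, samples = 0.0, []
    amp = math.sqrt(2.0 * sigma ** 2 / tau * dt)
    for _ in range(n_steps):
        v += -v / tau * dt + amp * rng.gauss(0.0, 1.0)
        samples.append(v)
    return samples

vs = simulate_ou(tau=1.0, sigma=2.0)
stationary_var = sum(v * v for v in vs) / len(vs)   # close to sigma**2 = 4
```

Breaking the fluctuation-dissipation link between the drag 1/tau and the noise amplitude would change the stationary variance, which is the constraint the paper's kinematic MaxEnt argument enforces.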
Directory of Open Access Journals (Sweden)
Charley E. Westbrook
2015-09-01
Full Text Available We investigate the survivorship, growth and diet preferences of hatchery-raised juvenile urchins, Tripneustes gratilla, to evaluate the efficacy of their use as biocontrol agents in the efforts to reduce alien invasive algae. In flow-through tanks, we measured urchin growth rates, feeding rates and feeding preferences among diets of the most common invasive algae found in Kāneʻohe Bay, Hawaiʻi: Acanthophora spicifera, Gracilaria salicornia, Eucheuma denticulatum and Kappaphycus clade B. Post-transport survivorship of outplanted urchins was measured in paired open and closed cages in three different reef environments (lagoon, reef flat and reef slope) for a month. Survivorship in closed cages was highest on the reef flat (∼75%) and intermediate in the lagoon and reef slope (∼50%). In contrast, open cages showed similar survivorship on the reef flat and in the lagoon, but only 20% of juvenile urchins survived in open cages placed on the reef slope. Urchins grew significantly faster on diets of G. salicornia (1.58 ± 0.14 mm/wk SE) and Kappaphycus clade B (1.69 ± 0.14 mm/wk) than on E. denticulatum (0.97 ± 0.14 mm/wk), with intermediate growth when fed on A. spicifera (1.23 ± 0.11 mm/wk). Interestingly, urchins display size-specific feeding preferences. In non-choice feeding trials, small urchins (17.5–22.5 mm test diameter) consumed G. salicornia fastest (6.08 ± 0.19 g/day SE), with A. spicifera (4.25 ± 0.02 g/day) and Kappaphycus clade B (3.83 ± 0.02 g/day) intermediate, and E. denticulatum was clearly the least consumed (2.32 ± 0.37 g/day). Medium-sized (29.8–43.8 mm) urchins likewise preferentially consumed G. salicornia (12.60 ± 0.08 g/day), with less clear differences among the other species, in which E. denticulatum was still consumed least (9.35 ± 0.90 g/day). In contrast, large urchins (45.0–65.0 mm) showed no significant preferences among the different algae species at all (12.43–15.24 g/day). Overall consumption rates in non
International Nuclear Information System (INIS)
Ryan, J.
1981-01-01
By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments
Hacker, Andrew
1976-01-01
Provides critical reviews of three books, "The Political Economy of Social Class", "Ethnicity: Theory and Experience," and "Ethnicity in the United States," focusing on the political economy of social class and ethnicity. (Author/AM)
Characterizing graphs of maximum matching width at most 2
DEFF Research Database (Denmark)
Jeong, Jisu; Ok, Seongmin; Suh, Geewon
2017-01-01
The maximum matching width is a width-parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...
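The cut-function mentioned above is an ordinary maximum bipartite matching. A sketch using Kuhn's augmenting-path algorithm follows; this is generic textbook code, not the authors' implementation, and the example graph is our own.

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm for maximum bipartite matching.

    `adj[u]` lists the right-side vertices adjacent to left vertex u.
    Returns the size of a maximum matching, the quantity used as the
    cut-function in maximum matching width (generic sketch, not the
    paper's code)."""
    match_right = [-1] * n_right          # right vertex -> matched left vertex

    def try_augment(u, seen):
        # Depth-first search for an augmenting path starting at left vertex u.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_right[v] == -1 or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# A 3x3 bipartite graph that admits a perfect matching:
size = max_bipartite_matching([[0, 1], [1, 2], [0, 2]], 3)
```

Kuhn's algorithm runs in O(V * E); for the small cuts arising in a branch-decomposition this is more than fast enough.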
Maximum entropy networks are more controllable than preferential attachment networks
International Nuclear Information System (INIS)
Hou, Lvlin; Small, Michael; Lao, Songyang
2014-01-01
A maximum entropy (ME) method to generate typical scale-free networks has been recently introduced. We investigate the controllability of ME networks and Barabási–Albert preferential attachment networks. Our experimental results show that ME networks are significantly more easily controlled than BA networks of the same size and the same degree distribution. Moreover, the control profiles are used to provide insight into control properties of both classes of network. We identify and classify the driver nodes and analyze the connectivity of their neighbors. We find that driver nodes in ME networks have fewer mutual neighbors and that their neighbors have lower average degree. We conclude that the properties of the neighbors of driver node sensitively affect the network controllability. Hence, subtle and important structural differences exist between BA networks and typical scale-free networks of the same degree distribution. - Highlights: • The controllability of maximum entropy (ME) and Barabási–Albert (BA) networks is investigated. • ME networks are significantly more easily controlled than BA networks of the same degree distribution. • The properties of the neighbors of driver node sensitively affect the network controllability. • Subtle and important structural differences exist between BA networks and typical scale-free networks
The Maximum Resource Bin Packing Problem
DEFF Research Database (Denmark)
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used ... algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find ...
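The two off-line algorithms named above can be sketched in a few lines. This is a generic illustration (unit capacity and the example items are our choices), not the authors' analysis code; note how, on the same items, the increasing order opens more bins, which is the goal in the maximum resource variant.

```python
def first_fit(items, capacity=1.0):
    """First-Fit: place each item into the first bin where it fits,
    opening a new bin only when necessary."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity + 1e-12:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

def first_fit_increasing(items, capacity=1.0):
    """First-Fit on items sorted in increasing size order."""
    return first_fit(sorted(items), capacity)

def first_fit_decreasing(items, capacity=1.0):
    """First-Fit on items sorted in decreasing size order."""
    return first_fit(sorted(items, reverse=True), capacity)

items = [0.6, 0.5, 0.4, 0.3, 0.2]
n_incr = len(first_fit_increasing(items))   # increasing order opens more bins
n_decr = len(first_fit_decreasing(items))
```

In the maximum resource setting "more bins" is better, the opposite of the classical objective, which is why the increasing order is the natural heuristic here.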
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
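To make the Pareto-tail idea concrete, the sketch below samples a pure Pareto law by inverse-transform sampling and recovers its exponent with the standard Hill estimator. The Hill estimator is a common alternative diagnostic, not the maximum-entropy test proposed in the article, and all parameter values are our assumptions.

```python
import math
import random

def sample_pareto(alpha, xmin, n, rng):
    """Inverse-transform sampling of Pareto(alpha, xmin):
    P(X > x) = (xmin / x)**alpha for x >= xmin."""
    return [xmin / rng.random() ** (1.0 / alpha) for _ in range(n)]

def hill_estimator(data, k):
    """Hill maximum-likelihood estimate of a power-law tail index from
    the k largest observations (a standard tail diagnostic, shown only
    to illustrate the Pareto-tail structure discussed in the article)."""
    xs = sorted(data, reverse=True)[: k + 1]
    threshold = xs[-1]                       # (k+1)-th largest point
    return k / sum(math.log(x / threshold) for x in xs[:-1])

rng = random.Random(0)
data = sample_pareto(alpha=2.0, xmin=1.0, n=20_000, rng=rng)
alpha_hat = hill_estimator(data, k=2_000)    # close to the true alpha = 2
```

On mixed data with a lognormal body, the Hill plot drifts with k, which is exactly the ambiguity the article's maximum-entropy test is designed to resolve.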
Credal Networks under Maximum Entropy
Lukasiewicz, Thomas
2013-01-01
We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...
DEFF Research Database (Denmark)
Rijkhoff, Jan
2007-01-01
... in grammatical descriptions of some 50 languages, which together constitute a representative sample of the world's languages (Hengeveld et al. 2004: 529). It appears that there are both quantitative and qualitative differences between word class systems of individual languages. Whereas some languages employ a parts-of-speech system that includes the categories Verb, Noun, Adjective and Adverb, other languages may use only a subset of these four lexical categories. Furthermore, quite a few languages have a major word class whose members cannot be classified in terms of the categories Verb – Noun – Adjective – Adverb, because they have properties that are strongly associated with at least two of these four traditional word classes (e.g. Adjective and Adverb). Finally, this article discusses some of the ways in which word class distinctions interact with other grammatical domains, such as syntax and morphology.
DEFF Research Database (Denmark)
Aktor, Mikael
2018-01-01
The notions of class (varṇa) and caste (jāti) run through the dharmaśāstra literature (i.e. Hindu Law Books) on all levels. They regulate marriage, economic transactions, work, punishment, penance, entitlement to rituals, identity markers like the sacred thread, and social interaction in general. Although this social structure was ideal in nature and not equally confirmed in other genres of ancient and medieval literature, it has nevertheless had an immense impact on Indian society. The chapter presents an overview of the system with its three privileged classes, the Brahmins, the Kṣatriyas ...
Class prediction for high-dimensional class-imbalanced data
Directory of Open Access Journals (Sweden)
Lusa Lara
2010-10-01
Full Text Available Abstract Background The goal of class prediction studies is to develop rules to accurately predict the class membership of new samples. The rules are derived using the values of the variables available for each subject: the main characteristic of high-dimensional data is that the number of variables greatly exceeds the number of samples. Frequently the classifiers are developed using class-imbalanced data, i.e., data sets where the number of samples in each class is not equal. Standard classification methods used on class-imbalanced data often produce classifiers that do not accurately predict the minority class; the prediction is biased towards the majority class. In this paper we investigate if the high-dimensionality poses additional challenges when dealing with class-imbalanced prediction. We evaluate the performance of six types of classifiers on class-imbalanced data, using simulated data and a publicly available data set from a breast cancer gene-expression microarray study. We also investigate the effectiveness of some strategies that are available to overcome the effect of class imbalance. Results Our results show that the evaluated classifiers are highly sensitive to class imbalance and that variable selection introduces an additional bias towards classification into the majority class. Most new samples are assigned to the majority class from the training set, unless the difference between the classes is very large. As a consequence, the class-specific predictive accuracies differ considerably. When the class imbalance is not too severe, down-sizing and asymmetric bagging embedding variable selection work well, while over-sampling does not. Variable normalization can further worsen the performance of the classifiers. Conclusions Our results show that matching the prevalence of the classes in training and test set does not guarantee good performance of classifiers and that the problems related to classification with class
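The "down-sizing" strategy evaluated above amounts to random undersampling of the majority class before training. A minimal sketch follows; the function and variable names are ours, and real pipelines would repeat this inside cross-validation together with variable selection.

```python
import random

def downsize(X, y, seed=0):
    """Random undersampling: shrink every class to the minority-class size.

    A sketch of the 'down-sizing' strategy discussed in the article;
    the asymmetric-bagging and variable-selection refinements are omitted."""
    rng = random.Random(seed)
    classes = sorted(set(y))
    by_class = {c: [i for i, label in enumerate(y) if label == c]
                for c in classes}
    n_min = min(len(idx) for idx in by_class.values())
    keep = []
    for c in classes:
        keep.extend(rng.sample(by_class[c], n_min))  # subsample each class
    keep.sort()                                      # preserve original order
    return [X[i] for i in keep], [y[i] for i in keep]

X = [[float(i)] for i in range(100)]
y = [0] * 90 + [1] * 10            # 9:1 class imbalance
X_bal, y_bal = downsize(X, y)      # balanced 10:10 training set
```

Balancing the training set this way trades a smaller sample for an unbiased decision threshold, which the article finds preferable to over-sampling when the imbalance is moderate.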
Maximum Entropy in Drug Discovery
Directory of Open Access Journals (Sweden)
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches, experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process the information at hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
DEFF Research Database (Denmark)
Ejsing-Duun, Stine; Hansbøl, Mikala
This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was initiated in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association Coding Pirates. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS) at the Institut for Skole og Læring, Professionshøjskolen Metropol, and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design, design thinking and design pedagogy at Forskningslab: It og Læringsdesign (ILD-LAB), Institut for kommunikation og psykologi, Aalborg University in Copenhagen. We followed the Coding Class project and carried out its evaluation and documentation from November 2016 to May 2017 ...
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Directory of Open Access Journals (Sweden)
Petr Stehlík
2015-01-01
Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time: u′_x (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ ℤ. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
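An explicit-Euler sketch of the discrete-time lattice equation with the bistable Nagumo nonlinearity f(u) = u(1 − u)(u − a). For a small enough time step the computed solution stays within [0, 1], consistent with the weak maximum principle; the parameters and zero-flux boundary treatment are our assumptions, not the paper's.

```python
def nagumo_step(u, k, dt, a=0.25):
    """One explicit Euler step of the lattice Nagumo equation
    u'_x = k*(u[x-1] - 2*u[x] + u[x+1]) + u*(1 - u)*(u - a),
    with zero-flux ends (scheme and parameters are our choices)."""
    n = len(u)
    new = u[:]
    for x in range(n):
        left = u[x - 1] if x > 0 else u[x]
        right = u[x + 1] if x < n - 1 else u[x]
        reaction = u[x] * (1.0 - u[x]) * (u[x] - a)
        new[x] = u[x] + dt * (k * (left - 2.0 * u[x] + right) + reaction)
    return new

u = [0.0] * 20
u[10] = 1.0                       # a single excited lattice site
for _ in range(200):
    u = nagumo_step(u, k=1.0, dt=0.05)
# For this small dt the solution remains in [0, 1] (weak maximum principle).
```

Increasing dt past the stability threshold makes the solution overshoot the interval [0, 1], which is exactly the time-step dependence of the maximum principle the paper quantifies.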
Maximum entropy estimation via Gauss-LP quadratures
Thély, Maxime; Sutter, Tobias; Mohajerin Esfahani, P.; Lygeros, John; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri
2017-01-01
We present an approximation method to a class of parametric integration problems that naturally appear when solving the dual of the maximum entropy estimation problem. Our method builds up on a recent generalization of Gauss quadratures via an infinite-dimensional linear program, and utilizes a
Directory of Open Access Journals (Sweden)
Jeane Alves de Almeida
2009-12-01
Full Text Available The objective of this study was to evaluate the productive and agronomic parameters of Panicum maximum cv. Mombaça and Brachiaria brizantha cv. Marandú grown in an Oxisol and an Entisol under different water regimes. The experiment was conducted in a greenhouse at the EMVZ – School of Veterinary Medicine and Zootecnia / Federal University of Tocantins, Araguaína campus, TO. A completely randomized design in a factorial arrangement (2x4x2) was used, consisting of two forage species, four water levels (25%, 50%, 75% and 100% of field capacity) and two soil types, with four replications, making 62 experimental units. The production of forage dry mass (MSF) increased linearly with the percentage of moisture for the two cultivars in both production cycles and in both soil types, the clayey soil being relatively superior to the sandy soil for both cultivars. Mean leaf width did not differ (P>0.05) among the three highest water regimes in the two cultivars; the same was observed for leaf size in Mombaça grass and for the two highest regimes in Marandú. Stem diameter did not differ (P>0.05) among the highest regimes. Plants were taller in the two highest moisture regimes (P<0.05). The number of tillers was greatest in the highest moisture regimes and differed significantly (P<0.05), with Mombaça producing many in the first cycle. The greatest tillering in both genera occurred under non-stress water regimes, which positively influenced the productive as well as the agronomic characteristics when compared with the stressful regimes.
Maximum stellar iron core mass
Indian Academy of Sciences (India)
Vol. 60, No. 3, journal of physics, March 2003, pp. 415–422. Maximum stellar iron core mass. F W Giacobbe, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.
Maximum entropy beam diagnostic tomography
International Nuclear Information System (INIS)
Mottershead, C.T.
1985-01-01
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs
A portable storage maximum thermometer
International Nuclear Information System (INIS)
Fayart, Gerard.
1976-01-01
A clinical thermometer that stores the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is signalled by a lamp. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system. [fr]
Neutron spectra unfolding with maximum entropy and maximum likelihood
International Nuclear Information System (INIS)
Itoh, Shikoh; Tsunoda, Toshiharu
1989-01-01
A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always brings a positive solution over the whole energy range. Moreover, the theory unifies both problems of overdetermined and of underdetermined. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, has become extinct by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
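For comparison, the classic expectation-maximization (Richardson-Lucy) iteration for Poisson-distributed counts also keeps the solution positive over the whole range. This generic sketch illustrates that positivity property; it is not the authors' combined MaxEnt/maximum-likelihood algorithm, and the response matrix below is a made-up two-bin example.

```python
def richardson_lucy(counts, response, guess, n_iter=200):
    """EM (Richardson-Lucy) iteration for Poisson counts.

    `response[i][j]` is the probability that true bin j is observed in
    bin i.  The multiplicative update preserves positivity, mirroring
    the positivity guarantee of the article's maximum-likelihood setting
    (classic textbook EM, not the authors' method)."""
    est = guess[:]
    n_obs, n_true = len(counts), len(guess)
    for _ in range(n_iter):
        pred = [sum(response[i][j] * est[j] for j in range(n_true))
                for i in range(n_obs)]
        for j in range(n_true):
            ratio = sum(response[i][j] * counts[i] / pred[i]
                        for i in range(n_obs) if pred[i] > 0)
            norm = sum(response[i][j] for i in range(n_obs))
            est[j] *= ratio / norm
    return est

# A smearing matrix that mixes two true bins into two observed bins:
R = [[0.8, 0.2],
     [0.2, 0.8]]
truth = [100.0, 50.0]
observed = [0.8 * 100 + 0.2 * 50, 0.2 * 100 + 0.8 * 50]  # noise-free check
est = richardson_lucy(observed, R, [75.0, 75.0])
```

On noise-free data the iteration recovers the true spectrum; with noisy counts one stops early or adds regularization, which is where the MaxEnt prior of the article enters.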
Maximum Margin Clustering of Hyperspectral Data
Niazmardi, S.; Safari, A.; Homayouni, S.
2013-09-01
In recent decades, large-margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for classification of hyperspectral data. However, the results of these algorithms mainly depend on the quality and quantity of available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large-margin algorithms to unsupervised learning. One of the recently proposed algorithms is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC objective is a non-convex problem. Most of the existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDPs), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms handle only two-class problems, so they cannot be used for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the algorithm gives acceptable results for hyperspectral data clustering.
On Maximum Entropy and Inference
Directory of Open Access Journals (Sweden)
Luigi Gresele
2017-11-01
Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.
Maximum Water Hammer Sensitivity Analysis
Jalil Emadi; Abbas Solemani
2011-01-01
Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or when pumps fail suddenly. Determining the maximum water hammer is one of the most important technical and economic issues that engineers and designers of pumping stations and conveyance pipelines must address. Hammer is a recent software application used to simulate water hammer. The present study focuses on determining the significance of ...
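Before any simulation, a first estimate of the maximum surge is the textbook Joukowsky relation Δp = ρ·a·Δv for an instantaneous valve closure. The sketch below is that one-line estimate with illustrative numbers of our choosing; it is not the Hammer-software simulation discussed in the abstract.

```python
def joukowsky_surge(density, wave_speed, delta_v):
    """Joukowsky estimate of the maximum water-hammer pressure rise,
    delta_p = rho * a * delta_v, for an instantaneous valve closure.
    A textbook upper-bound first estimate, valid when the closure time
    is shorter than the pipe's wave round-trip time."""
    return density * wave_speed * delta_v

# Water (1000 kg/m^3), a typical steel-pipe wave speed (~1200 m/s),
# and a 2 m/s flow brought to rest (all values are illustrative):
dp = joukowsky_surge(density=1000.0, wave_speed=1200.0, delta_v=2.0)
dp_bar = dp / 1e5          # pressure rise expressed in bar
```

Slower closures and line friction reduce the surge below this bound, which is why transient-simulation tools are used for the detailed design.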
Directory of Open Access Journals (Sweden)
Yunfeng Shan
2008-01-01
Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
LCLS Maximum Credible Beam Power
International Nuclear Information System (INIS)
Clendenin, J.
2005-01-01
The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed
Stability of latent class segments over time
DEFF Research Database (Denmark)
Mueller, Simone
2011-01-01
Dynamic stability, as the degree to which identified segments at a given time remain unchanged over time in terms of number, size and profile, is a desirable segment property which has received limited attention so far. This study addresses the question to what degree latent classes identified from ... logit model suggests significant changes in the price sensitivity and the utility from environmental claims between both experimental waves. A pooled scale-adjusted latent class model is estimated jointly over both waves and the relative size of latent classes is compared across waves, resulting in significant differences in the size of two out of seven classes. These differences can largely be accounted for by the changes on the aggregated level. The relative size of latent classes is correlated at 0.52, suggesting a fair robustness. An ex-post characterisation of latent classes by behavioural ...
Constraints on the adult-offspring size relationship in protists.
Caval-Holme, Franklin; Payne, Jonathan; Skotheim, Jan M
2013-12-01
The relationship between adult and offspring size is an important aspect of reproductive strategy. Although this filial relationship has been extensively examined in plants and animals, we currently lack comparable data for protists, whose strategies may differ due to the distinct ecological and physiological constraints on single-celled organisms. Here, we report measurements of adult and offspring sizes in 3888 species and subspecies of foraminifera, a class of large marine protists. Foraminifera exhibit a wide range of reproductive strategies; species of similar adult size may have offspring whose sizes vary 100-fold. Yet, a robust pattern emerges. The minimum (5th percentile), median, and maximum (95th percentile) offspring sizes exhibit a consistent pattern of increase with adult size independent of environmental change and taxonomic variation over the past 400 million years. The consistency of this pattern may arise from evolutionary optimization of the offspring size-fecundity trade-off and/or from cell-biological constraints that limit the range of reproductive strategies available to single-celled organisms. When compared with plants and animals, foraminifera extend the evidence that offspring size covaries with adult size across an additional five orders of magnitude in organism size. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
Generic maximum likely scale selection
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale-invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based ...
International Nuclear Information System (INIS)
Williams, J.A.
1979-09-01
The ductile fracture toughness, J_Ic, of ASTM A 533, Grade B, Class 1 steel and of ASTM A 533 heat treated to simulate irradiation was determined for 10- to 100-mm-thick compact specimens. The toughness at maximum specimen load was also measured to determine the conservatism of J_Ic. The toughness of ASTM A 533, Grade B, Class 1 steel was 349 kJ/m², and at the equivalent upper-shelf temperature the heat-treated material exhibited 87 kJ/m². The maximum-load fracture toughness was found to be linearly proportional to specimen size, and only specimens which failed to meet ASTM size criteria exhibited maximum-load toughness less than J_Ic.
Extreme Maximum Land Surface Temperatures.
Garratt, J. R.
1992-09-01
There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
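The energy-balance argument can be reproduced with a few lines of bisection. The emissivity and the lumped heat-transfer coefficient below are our assumptions, chosen to mimic the weak surface coupling of a dry, dark soil; this is a bare-bones version of the balance discussed in the abstract, not the author's simulation.

```python
def surface_temperature(absorbed_flux, air_temp_k, conductance,
                        emissivity=0.95, sigma=5.67e-8):
    """Solve Q = eps*sigma*T**4 + h*(T - T_air) for the surface
    temperature T by bisection.  `conductance` (h) lumps the turbulent
    and soil heat fluxes; all numbers here are illustrative assumptions."""
    def residual(t):
        return (emissivity * sigma * t ** 4
                + conductance * (t - air_temp_k) - absorbed_flux)

    lo, hi = air_temp_k, air_temp_k + 200.0   # residual is monotone in t
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# 1000 W/m^2 absorbed, 55 degC (328.15 K) screen air temperature, and a
# weak lumped conductance for a dry, dark, poorly conducting soil:
t_surf = surface_temperature(1000.0, 328.15, conductance=1.5)
t_surf_c = t_surf - 273.15        # lands in the ~90 degC range
```

Raising the conductance (moister soil, stronger wind) pulls the solution back toward the air temperature, which is why the extreme values are confined to dry, calm conditions.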
The linear sizes tolerances and fits system modernization
Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.
2018-04-01
The study addresses the urgent topic of assuring the quality of technical products through the tolerancing of component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, firstly, to classify as linear sizes the additional linear coordinating sizes that determine the location of part elements and, secondly, to justify the basic deviation of the tolerance interval for an element's linear size. Geometrical modeling of real part elements, together with analytical and experimental methods, is used in the research. It is shown that the linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The basic deviation of this coordinating tolerance is the average zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system are retained for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals: it becomes the maximum deviation corresponding to the material limit of the element, i.e., EI, the lower deviation, for the sizes of internal elements (holes), and es, the upper deviation, for the sizes of external elements (shafts). It is the maximum-material sizes that are involved in the mating of shafts and holes and that determine the type of fit.
Avinash-Shukla mass limit for the maximum dust mass supported against gravity by electric fields
Avinash, K.
2010-08-01
The existence of a new class of astrophysical objects, where gravity is balanced by the shielded electric fields associated with the electric charge on the dust, is shown. Further, a mass limit MA for the maximum dust mass that can be supported against gravitational collapse by these fields is obtained. If the total mass of the dust in the interstellar cloud MD > MA, the dust collapses, while if MD < MA, stable equilibrium may be achieved. Heuristic arguments are given to show that the physics of the mass limit is similar to that of Chandrasekhar's mass limit for compact objects, and the similarity of these dust configurations to neutron stars and white dwarfs is pointed out. The effects of the grain size distribution on the mass limit and of strong correlations in the core of such objects are discussed. The possible location of these dust configurations inside interstellar clouds is pointed out.
Efficiency of autonomous soft nanomachines at maximum power.
Seifert, Udo
2011-01-14
We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
Efficient heuristics for maximum common substructure search.
Englert, Péter; Kovács, Péter
2015-05-26
Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
System for memorizing maximum values
Bozeman, Richard J., Jr.
1992-08-01
The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected is dependent on the converted digital value. A microfuse memory device, with n segments, connects across the driver output lines. Each segment is associated with one driver output line and includes a microfuse that is blown when a signal appears on the associated driver output line.
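A toy software model of the memorizing behavior might look as follows. The class name, the number of levels, and the threshold spacing are all hypothetical; the sketch only illustrates the one-way (blown-fuse) peak memory the patent describes:

```python
class PeakHoldMemory:
    """Toy model of the patent's peak memory: an n-segment fuse array in
    which each fuse, once blown, stays blown (a one-way memory).
    n_levels=8 and the uniform thresholds are illustrative assumptions."""
    def __init__(self, n_levels=8, full_scale=1.0):
        self.thresholds = [full_scale * (i + 1) / n_levels for i in range(n_levels)]
        self.blown = [False] * n_levels

    def sense(self, value):
        # Rectify the input, then blow every fuse whose threshold is reached.
        value = abs(value)
        for i, th in enumerate(self.thresholds):
            if value >= th:
                self.blown[i] = True

    def max_level(self):
        """Highest blown segment (1-based), or 0 if none was ever reached."""
        return max((i + 1 for i, b in enumerate(self.blown) if b), default=0)

mem = PeakHoldMemory()
for v in [0.1, -0.6, 0.33, 0.49]:  # the -0.6 excursion sets the peak
    mem.sense(v)
```

Reading `mem.max_level()` afterwards recovers the largest quantized excursion, regardless of later smaller inputs.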
Remarks on the maximum luminosity
Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon
2018-04-01
The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, LP=c5/G . Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 LP . We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
Scintillation counter, maximum gamma aspect
International Nuclear Information System (INIS)
Thumim, A.D.
1975-01-01
A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassemblable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample-receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)
Determination of the maximum-depth to potential field sources by a maximum structural index method
Fedi, M.; Florio, G.
2013-01-01
A simple and fast determination of the limiting depth to the sources may represent a significant help to the data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth by using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.
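The mechanism behind the maximum-depth criterion can be illustrated with a toy Euler-deconvolution sketch on a synthetic gravity point source. Everything below (field expressions, sampling, the choice of an over-large index) is an illustrative assumption, not the authors' implementation; the point is only that raising the structural index N raises the estimated depth, so the highest allowable N yields a maximum depth:

```python
# Euler's homogeneity equation on a profile at z = 0 with a point source
# at (x0, depth):  (x - x0)*df/dx - depth*df/dz = -N*f, rearranged to
# x0*df/dx + depth*df/dz = x*df/dx + N*f and solved by least squares.
def point_source_data(d, xs):
    """Analytic field and derivatives of a point mass at depth d (unit strength)."""
    f, fx, fz = [], [], []
    for x in xs:
        r2 = x * x + d * d
        r5 = r2 ** 2.5
        f.append(d / r2 ** 1.5)                  # vertical gravity
        fx.append(-3.0 * d * x / r5)             # horizontal derivative
        fz.append((3.0 * d * d - r2) / r5)       # vertical derivative
    return f, fx, fz

def euler_depth(xs, f, fx, fz, n):
    """Least squares for (x0, depth) via the 2x2 normal equations."""
    s_xx = sum(v * v for v in fx)
    s_xz = sum(u * v for u, v in zip(fx, fz))
    s_zz = sum(v * v for v in fz)
    b = [x * u + n * g for x, u, g in zip(xs, fx, f)]
    r1 = sum(u * w for u, w in zip(fx, b))
    r2 = sum(v * w for v, w in zip(fz, b))
    det = s_xx * s_zz - s_xz * s_xz
    depth = (s_xx * r2 - s_xz * r1) / det
    return depth

xs = [i * 0.05 - 5.0 for i in range(201)]
f, fx, fz = point_source_data(2.0, xs)
d_n2 = euler_depth(xs, f, fx, fz, n=2)  # true index for a gravity point mass
d_n3 = euler_depth(xs, f, fx, fz, n=3)  # larger index -> larger (maximum) depth
```

With the correct index the true depth (2.0) is recovered; with the larger index the estimate grows, which is exactly why evaluating at Nmax bounds the depth from above.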
Criticality predicts maximum irregularity in recurrent networks of excitatory nodes.
Directory of Open Access Journals (Sweden)
Yahya Karimipanah
A rigorous understanding of brain dynamics and function requires a conceptual bridge between multiple levels of organization, including neural spiking and network-level population activity. Mounting evidence suggests that neural networks of cerebral cortex operate at a critical regime, defined as the transition point between a phase in which activity quickly dies out and a phase of sustained, chaotic activity. However, although criticality brings certain functional advantages for information processing, its supporting evidence is still far from conclusive, as it has been based mostly on power-law scaling of the sizes and durations of cascades of activity. Moreover, to what degree such a hypothesis could explain some fundamental features of neural activity is still largely unknown. One of the most prevalent features of cortical activity in vivo is the irregularity of spike trains, measured in terms of a coefficient of variation (CV) larger than one. Here, using a minimal computational model of excitatory nodes, we show that irregular spiking (CV > 1) naturally emerges in a recurrent network operating at criticality. More importantly, we show that even in the presence of other sources of spike irregularity, being at criticality maximizes the mean coefficient of variation of neurons, thereby maximizing their spike irregularity. Furthermore, we also show that such maximized irregularity results in maximum correlation between neuronal firing rates and their corresponding spike irregularity (measured in terms of CV). On the one hand, using a model in the universality class of directed percolation, we propose new hallmarks of criticality at the single-unit level, which could be applicable to any network of excitable nodes. On the other hand, given the controversy over the neural criticality hypothesis, we discuss the limitations of this approach and to what degree the results support the criticality hypothesis in real neural networks. Finally
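The spike-train statistic at the center of this abstract is easy to compute. The interval data below are synthetic and purely illustrative; the sketch only shows the baselines the CV > 1 criterion is measured against (a regular train gives CV = 0, a Poisson-like train gives CV ≈ 1):

```python
import random
from statistics import mean, pstdev

def cv(intervals):
    """Coefficient of variation of interspike intervals: std / mean."""
    return pstdev(intervals) / mean(intervals)

random.seed(0)  # deterministic synthetic data
cv_regular = cv([0.05] * 1000)                                     # clock-like train
cv_poisson = cv([random.expovariate(20.0) for _ in range(50000)])  # Poisson-like, CV ~ 1
```

Irregularity "beyond Poisson", as in the abstract's criterion, would show up as a CV exceeding the Poisson baseline of one.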
Maximum entropy and Bayesian methods
International Nuclear Information System (INIS)
Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.
1992-01-01
Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come.
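As a concrete instance of the maximum entropy recipe these volumes cover, consider the classic loaded-die problem: among all distributions on faces 1..6 with a prescribed mean, the entropy maximizer has the Gibbs form p_i ∝ exp(λ·i). The target mean and bisection bounds below are illustrative choices:

```python
import math

def maxent_die(target_mean, lo=-20.0, hi=20.0, iters=200):
    """Maximum-entropy distribution on faces 1..6 with a given mean.
    The constrained maximizer has Gibbs form p_i ~ exp(lam * i); since the
    mean is monotone increasing in lam, lam is found by bisection."""
    def mean_for(lam):
        w = [math.exp(lam * i) for i in range(1, 7)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / z

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in range(1, 7)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)  # a mean above 3.5 tilts probability toward high faces
```

The result is the least-committal distribution consistent with the single moment constraint, which is the essence of the inference principle discussed above.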
Bracey, Gerald W.
2000-01-01
Alan Krueger's reanalyses of Eric Hanushek's school-productivity data show that Hanushek's "money doesn't matter" conclusions (influential in several states' education-finance hearings) have no factual basis. Hanushek excluded Tennessee's student/teacher ratio study (Project STAR). Also, class size is influencing students' success in…
DEFF Research Database (Denmark)
Ernst, Erik; Ostermann, Klaus; Cook, William Randall
2006-01-01
Virtual classes are class-valued attributes of objects. Like virtual methods, virtual classes are defined in an object's class and may be redefined within subclasses. They resemble inner classes, which are also defined within a class, but virtual classes are accessed through object instances...... model for virtual classes has been a long-standing open question. This paper presents a virtual class calculus, vc, that captures the essence of virtual classes in these full-fledged programming languages. The key contributions of the paper are a formalization of the dynamic and static semantics of vc...
Maximum entropy principle for transportation
International Nuclear Information System (INIS)
Bilich, F.; Da Silva, R.
2008-01-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
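The doubly constrained entropy-maximizing trip model that this line of work builds on can be sketched as follows. The margins, cost matrix, and beta are illustrative assumptions; the balancing factors A_i, B_j carry the constraint information that, in the paper's formulation, is re-encoded in dependence coefficients:

```python
import math

def gravity_trips(origins, dests, cost, beta=0.1, iters=500):
    """Doubly constrained maximum-entropy trip distribution:
    T_ij = A_i * O_i * B_j * D_j * exp(-beta * c_ij), with the balancing
    factors A, B found by iterative proportional fitting (Furness)."""
    n, m = len(origins), len(dests)
    k = [[math.exp(-beta * cost[i][j]) for j in range(m)] for i in range(n)]
    a = [1.0] * n
    b = [1.0] * m
    for _ in range(iters):
        a = [1.0 / sum(k[i][j] * b[j] * dests[j] for j in range(m)) for i in range(n)]
        b = [1.0 / sum(k[i][j] * a[i] * origins[i] for i in range(n)) for j in range(m)]
    return [[a[i] * origins[i] * b[j] * dests[j] * k[i][j] for j in range(m)]
            for i in range(n)]

cost = [[1.0, 4.0], [3.0, 2.0]]                       # illustrative travel costs
trips = gravity_trips(origins=[100, 200], dests=[120, 180], cost=cost)
```

After balancing, each row sums to its origin total and each column to its destination total, which is precisely the constrained form the entropy maximization yields.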
New shower maximum trigger for electrons and photons at CDF
International Nuclear Information System (INIS)
Amidei, D.; Burkett, K.; Gerdes, D.; Miao, C.; Wolinski, D.
1994-01-01
For the 1994 Tevatron collider run, CDF has upgraded the electron and photon trigger hardware to make use of shower position and size information from the central shower maximum detector. For electrons, the upgrade has resulted in a 50% reduction in backgrounds while retaining approximately 90% of the signal. The new trigger also eliminates the background to photon triggers from single-phototube spikes
New shower maximum trigger for electrons and photons at CDF
International Nuclear Information System (INIS)
Gerdes, D.
1994-08-01
For the 1994 Tevatron collider run, CDF has upgraded the electron and photon trigger hardware to make use of shower position and size information from the central shower maximum detector. For electrons, the upgrade has resulted in a 50% reduction in backgrounds while retaining approximately 90% of the signal. The new trigger also eliminates the background to photon triggers from single-phototube discharge
On the size of edge chromatic 5-critical graphs
Directory of Open Access Journals (Sweden)
K. Kayathri
2017-04-01
In this paper, we study the size of edge chromatic 5-critical graphs in several classes of 5-critical graphs. For most of the classes considered we obtain the exact size; for the remaining classes we give new bounds on the number of major vertices and on the size.
International Nuclear Information System (INIS)
Wang, P.-Y.; Hou, S.-S.
2005-01-01
In this paper, performance analysis and comparison based on the maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable-temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant-volume heat addition and constant-pressure heat rejection. The study is based purely on classical thermodynamic analysis methodology, and all results and conclusions follow from it. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it accounts for the effect of engine size as related to investment cost. The results show that an engine design based on maximum power density, with constant effectiveness of the hot- and cold-side heat exchangers or constant inlet temperature ratio of the heat reservoirs, will have smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the viewpoints of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power conditions. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density conditions requires tougher materials for engine construction than one based on maximum power conditions.
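For context, the standard benchmark for efficiency at maximum power of an endoreversible engine is the Curzon-Ahlborn value, η_CA = 1 − √(Tc/Th), which always sits below the Carnot limit. The reservoir temperatures below are illustrative:

```python
import math

def carnot_efficiency(t_hot, t_cold):
    """Reversible upper bound: 1 - Tc/Th."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_hot, t_cold):
    """Efficiency at maximum power of an endoreversible engine: 1 - sqrt(Tc/Th)."""
    return 1.0 - math.sqrt(t_cold / t_hot)

eta_c = carnot_efficiency(1200.0, 300.0)            # Carnot limit
eta_ca = curzon_ahlborn_efficiency(1200.0, 300.0)   # efficiency at maximum power
```

Designs optimized for power (or power density, as above) trade some efficiency against output; η_CA quantifies that trade-off for the simplest finite-rate heat-transfer model.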
49 CFR 172.446 - CLASS 9 label.
2010-10-01
(a) Except for size and color, the CLASS 9 label must be as prescribed in § 172.446 (49 CFR Part 172, Subpart E, Labeling). The top half of the label bears seven vertical black stripes with the six white spaces between them; the lower half of the label must be white with the class number "9" underlined in the bottom corner.
Last Glacial Maximum Salinity Reconstruction
Homola, K.; Spivack, A. J.
2016-12-01
It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high-precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high-precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of its apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10⁻⁶ g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10⁻⁶ g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl⁻ and SO₄²⁻) and cations (Na⁺, Mg²⁺, Ca²⁺, and K⁺) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO₄²⁻/Cl⁻ and Mg²⁺/Na⁺, and 0.4% for Ca²⁺/Na⁺ and K⁺/Na⁺. Alkalinity, pH and dissolved inorganic carbon of the porewater were determined to precisions better than 4% when ratioed to Cl⁻, and used to calculate HCO₃⁻ and CO₃²⁻. Apparent partial molar densities in seawater were
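The leverage of the density route can be checked with a back-of-envelope sensitivity estimate. The haline density derivative used here (≈0.78 kg m⁻³ per g/kg near standard seawater conditions) is an approximate textbook value, not a number from the study:

```python
# Sensitivity of density-based salinity: with d(rho)/dS ~ 0.78 (kg/m^3)
# per (g/kg) (an approximate value for standard seawater), a density
# precision maps directly to a salinity precision.
DRHO_DS = 0.78  # kg m^-3 per (g/kg), approximate haline derivative

density_precision_g_per_ml = 2.3e-6                       # stated in the abstract
density_precision_kg_per_m3 = density_precision_g_per_ml * 1000.0

salinity_precision = density_precision_kg_per_m3 / DRHO_DS  # in g/kg
```

The result is on the order of 0.003 g/kg, consistent with the ~0.002 g/kg uncertainty quoted in the abstract.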
Maximum Parsimony on Phylogenetic networks
2012-01-01
Background Phylogenetic networks are generalizations of phylogenetic trees that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network, and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as the Sankoff and Fitch algorithms, extend naturally to networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
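The tree case that these network algorithms extend can be sketched with Fitch's small-parsimony algorithm on a toy character. The tree, leaf states, and unit substitution cost are illustrative; the network version adds cost terms for reticulate edges as described above:

```python
def fitch_score(tree):
    """Fitch small parsimony on a rooted binary tree.
    A tree is either a leaf state string like "A" or a (left, right) pair.
    Returns (candidate state set at the root, substitution count)."""
    if isinstance(tree, str):
        return {tree}, 0
    (ls, lc), (rs, rc) = fitch_score(tree[0]), fitch_score(tree[1])
    if ls & rs:
        return ls & rs, lc + rc       # intersection: no extra substitution
    return ls | rs, lc + rc + 1       # union: count one substitution

# Toy character on four leaves with topology ((A, C), (A, G))
states, score = fitch_score((("A", "C"), ("A", "G")))
```

Two substitutions are needed here (one on each cherry), and the root set collapses to {A}; on a network the same bottom-up pass runs edge by edge, barring conflicting assignments at reticulate vertices.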
What controls the maximum magnitude of injection-induced earthquakes?
Eaton, D. W. S.
2017-12-01
Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
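McGarr's deterministic ceiling is simple enough to compute directly: the maximum seismic moment is the product of shear modulus and net injected volume, converted to moment magnitude with the standard Mw formula. The shear modulus and injected volume below are illustrative assumptions:

```python
import math

def mcgarr_max_magnitude(injected_volume_m3, shear_modulus_pa=3.0e10):
    """Deterministic ceiling on induced-earthquake size (after McGarr, 2014):
    maximum seismic moment M0 = shear modulus * net injected fluid volume,
    converted to moment magnitude via Mw = (2/3) * (log10(M0) - 9.1).
    The default shear modulus (30 GPa) is an illustrative crustal value."""
    m0 = shear_modulus_pa * injected_volume_m3  # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

mw = mcgarr_max_magnitude(1.0e5)  # e.g. 100,000 m^3 net injected (hypothetical)
```

Because Mw grows only logarithmically with injected volume, capping the volume caps the magnitude, which is the risk-management implication the abstract draws from this model.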
A new Class of Extremal Composites
DEFF Research Database (Denmark)
Sigmund, Ole
2000-01-01
microstructure belonging to the new class of composites has maximum bulk modulus and lower shear modulus than any previously known composite. Inspiration for the new composite class comes from a numerical topology design procedure which solves the inverse homogenization problem of distributing two isotropic......The paper presents a new class of two-phase isotropic composites with extremal bulk modulus. The new class consists of micro geometrics for which exact solutions can be proven and their bulk moduli are shown to coincide with the Hashin-Shtrikman bounds. The results hold for two and three dimensions...... and for both well- and non-well-ordered isotropic constituent phases. The new class of composites constitutes an alternative to the three previously known extremal composite classes: finite rank laminates, composite sphere assemblages and Vigdergauz microstructures. An isotropic honeycomb-like hexagonal...
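The Hashin-Shtrikman bounds that the abstract says the new composites attain can be evaluated from the classical closed-form expressions for the bulk modulus of a two-phase isotropic composite. The phase moduli and volume fractions below are illustrative, and the formulas are the standard 1963 bounds rather than anything specific to the paper:

```python
def hs_bulk_bounds(k1, mu1, c1, k2, mu2, c2):
    """Hashin-Shtrikman bounds on the effective bulk modulus of a 3D
    two-phase isotropic composite. Phase 1 is taken as the softer phase
    (k1 <= k2, mu1 <= mu2); c1 + c2 = 1 are the volume fractions."""
    lower = k1 + c2 / (1.0 / (k2 - k1) + 3.0 * c1 / (3.0 * k1 + 4.0 * mu1))
    upper = k2 + c1 / (1.0 / (k1 - k2) + 3.0 * c2 / (3.0 * k2 + 4.0 * mu2))
    return lower, upper

# Illustrative well-ordered phases at 50/50 volume fraction.
k_lo, k_hi = hs_bulk_bounds(k1=1.0, mu1=0.5, c1=0.5, k2=10.0, mu2=5.0, c2=0.5)
```

A microstructure is extremal in the paper's sense when its effective bulk modulus coincides with one of these bounds, as the finite-rank laminates and sphere assemblages mentioned above do.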
U.S. Department of Health & Human Services — The RxClass Browser is a web application for exploring and navigating through the class hierarchies to find the RxNorm drug members associated with each class....
A maximum likelihood framework for protein design
Directory of Open Access Journals (Sweden)
Philippe Hervé
2006-06-01
Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces
Using Mobile Phone Technology in EFL Classes
Sad, Süleyman Nihat
2008-01-01
Teachers of English as a foreign language (EFL) who want to develop successful lessons face numerous challenges, including large class sizes and inadequate instructional materials and technological support. Another problem is unmotivated students who refuse to participate in class activities. According to Harmer (2007), uncooperative and…
Two-dimensional maximum entropy image restoration
International Nuclear Information System (INIS)
Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.
1977-07-01
An optical check problem was constructed to test p log p maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. A comparison with maximum a posteriori restoration is made. 7 figures
Direct maximum parsimony phylogeny reconstruction from genotype data.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2007-12-05
Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. More commonly, data are available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data, so phylogenetic applications for autosomal data must rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
Biddle, Bruce J.; Berliner, David C.
Interest in class size is widespread today. Debates often take place about "ideal" class size. Controversial efforts to reduce class size have appeared at both the federal level and in various states around the nation. This paper reviews research on class size and discusses findings, how these findings can be explained, and policy implications.…
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Mammographic image restoration using maximum entropy deconvolution
International Nuclear Information System (INIS)
Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R
2004-01-01
An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization
Paving the road to maximum productivity.
Holland, C
1998-01-01
"Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.
Type Ibn Supernovae Show Photometric Homogeneity and Spectral Diversity at Maximum Light
Energy Technology Data Exchange (ETDEWEB)
Hosseinzadeh, Griffin; Arcavi, Iair; McCully, Curtis; Howell, D. Andrew [Las Cumbres Observatory, 6740 Cortona Dr Ste 102, Goleta, CA 93117-5575 (United States); Valenti, Stefano [Department of Physics, University of California, 1 Shields Ave, Davis, CA 95616-5270 (United States); Johansson, Joel [Department of Particle Physics and Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel); Sollerman, Jesper; Fremling, Christoffer; Karamehmetoglu, Emir [Oskar Klein Centre, Department of Astronomy, Stockholm University, Albanova University Centre, SE-106 91 Stockholm (Sweden); Pastorello, Andrea; Benetti, Stefano; Elias-Rosa, Nancy [INAF-Osservatorio Astronomico di Padova, Vicolo dell’Osservatorio 5, I-35122 Padova (Italy); Cao, Yi; Duggan, Gina; Horesh, Assaf [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, Mail Code 249-17, Pasadena, CA 91125 (United States); Cenko, S. Bradley [Astrophysics Science Division, NASA Goddard Space Flight Center, Mail Code 661, Greenbelt, MD 20771 (United States); Clubb, Kelsey I.; Filippenko, Alexei V. [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States); Corsi, Alessandra [Department of Physics, Texas Tech University, Box 41051, Lubbock, TX 79409-1051 (United States); Fox, Ori D., E-mail: griffin@lco.global [Space Telescope Science Institute, 3700 San Martin Dr, Baltimore, MD 21218 (United States); and others
2017-02-20
Type Ibn supernovae (SNe) are a small yet intriguing class of explosions whose spectra are characterized by low-velocity helium emission lines with little to no evidence for hydrogen. The prevailing theory has been that these are the core-collapse explosions of very massive stars embedded in helium-rich circumstellar material (CSM). We report optical observations of six new SNe Ibn: PTF11rfh, PTF12ldy, iPTF14aki, iPTF15ul, SN 2015G, and iPTF15akq. This brings the sample size of such objects in the literature to 22. We also report new data, including a near-infrared spectrum, on the Type Ibn SN 2015U. In order to characterize the class as a whole, we analyze the photometric and spectroscopic properties of the full Type Ibn sample. We find that, despite the expectation that CSM interaction would generate a heterogeneous set of light curves, as seen in SNe IIn, most Type Ibn light curves are quite similar in shape, declining at rates around 0.1 mag day{sup −1} during the first month after maximum light, with a few significant exceptions. Early spectra of SNe Ibn come in at least two varieties, one that shows narrow P Cygni lines and another dominated by broader emission lines, both around maximum light, which may be an indication of differences in the state of the progenitor system at the time of explosion. Alternatively, the spectral diversity could arise from viewing-angle effects or merely from a lack of early spectroscopic coverage. Together, the relative light curve homogeneity and narrow spectral features suggest that the CSM consists of a spatially confined shell of helium surrounded by a less dense extended wind.
Receiver function estimated by maximum entropy deconvolution
Institute of Scientific and Technical Information of China (English)
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient always remains below 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
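The Toeplitz/Levinson step mentioned in this abstract can be sketched with the standard Levinson-Durbin recursion, which fits an error-predicting (linear prediction) filter from autocorrelation values and yields the reflection coefficients whose magnitude staying below 1 guarantees stability. This is a generic sketch of that textbook recursion, not the paper's actual implementation.

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion: fit an order-p error-predicting
    (linear prediction) filter from autocorrelation values r[0..p].
    Returns (a, k, err): prediction coefficients (a[0] == 1),
    reflection coefficients, and the final prediction-error power."""
    a = [0.0] * (order + 1)
    a[0] = 1.0
    err = r[0]
    k_all = []
    for i in range(1, order + 1):
        # reflection coefficient for stage i
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        k_all.append(k)
        # symmetric update of the prediction coefficients
        a_new = a[:]
        for j in range(1, i):
            a_new[j] = a[j] + k * a[i - j]
        a_new[i] = k
        a = a_new
        err *= (1.0 - k * k)  # stability requires |k| < 1
    return a, k_all, err
```

For an AR(1)-like autocorrelation `r = [1, 0.5, 0.25]` the recursion recovers the single coefficient −0.5 and a zero second coefficient, with every reflection coefficient inside the unit interval.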
Maximum Power from a Solar Panel
Directory of Open Access Journals (Sweden)
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are found by differentiating the equation for power and locating its maximum. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are plotted as functions of the time of day.
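The differentiation step this abstract describes (locating the maximum of P(V) = V·I(V)) can be sketched numerically. The single-diode model and all parameter values below are illustrative assumptions, not the project's actual panel data.

```python
import math

def panel_current(v, i_ph=5.0, i_0=1e-9, n_vt=1.6):
    """Illustrative single-diode panel model:
    I(V) = I_ph - I_0 * (exp(V / (n * Vt)) - 1).
    All parameter values are assumptions for demonstration."""
    return i_ph - i_0 * (math.exp(v / n_vt) - 1.0)

def max_power_point(v_lo=0.0, v_hi=40.0, steps=20000):
    """Locate the voltage where dP/dV = 0 by scanning P(V) = V * I(V)
    on a fine grid (a numerical stand-in for setting the derivative
    of the power equation to zero)."""
    best_v, best_p = 0.0, 0.0
    for i in range(steps + 1):
        v = v_lo + (v_hi - v_lo) * i / steps
        p = v * panel_current(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p
```

With these assumed parameters the maximum-power voltage lands near 31 V, a little below the open-circuit voltage, as expected for a diode-limited I-V curve.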
... of cards One 3-ounce (84 grams) serving of fish is a checkbook One-half cup (40 grams) ... for the smallest size. By eating a small hamburger instead of a large, you will save about 150 calories. ...
Maximum likelihood estimation of phase-type distributions
DEFF Research Database (Denmark)
Esparza, Luz Judith R
This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions for both univariate and multivariate cases. Methods like the EM algorithm and Markov chain Monte Carlo are applied for this purpose. Furthermore, this thesis provides explicit formulae for computing the Fisher information matrix for discrete and continuous phase-type distributions, which is needed to find confidence regions for their estimated parameters. Finally, a new general class of distributions, called bilateral matrix-exponential distributions, is defined. These distributions have the entire real line as domain and can be used, for instance, for modelling. In addition, this class of distributions…
Using Elementary Mechanics to Estimate the Maximum Range of ICBMs
Amato, Joseph
2018-04-01
North Korea's development of nuclear weapons and, more recently, intercontinental ballistic missiles (ICBMs) has added a grave threat to world order. The threat presented by these weapons depends critically on missile range, i.e., the ability to reach North America or Europe while carrying a nuclear warhead. Using the limited information available from near-vertical test flights, how do arms control experts estimate the maximum range of an ICBM? The purpose of this paper is to show, using mathematics and concepts appropriate to a first-year calculus-based mechanics class, how a missile's range can be estimated from the (observable) altitude attained during its test flights. This topic—while grim—affords an ideal opportunity to show students how the application of basic physical principles can inform and influence public policy. For students who are already familiar with Kepler's laws, it should be possible to present in a single class period.
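The elementary-mechanics estimate this abstract describes can be sketched directly. Assuming impulsive burnout at the surface, no drag, and a non-rotating Earth, a near-vertical test to apogee h implies a burnout speed v² = 2μ(1/R − 1/(R+h)); the maximum-range ballistic trajectory for that speed subtends a ground angle ψ with sin(ψ/2) = Q/(2−Q), Q = v²R/μ, which here simplifies to sin(ψ/2) = h/R. These simplifying assumptions are mine; the paper's treatment may differ in detail.

```python
import math

R_EARTH = 6371e3   # mean Earth radius, m
MU = 3.986e14      # Earth's gravitational parameter GM, m^3/s^2

def max_range_from_apogee(h):
    """Estimate the maximum ground range (m) of a ballistic missile whose
    near-vertical test flight reached apogee altitude h (m).
    Assumptions: impulsive burnout at the surface, no drag,
    non-rotating Earth. Energy conservation gives the burnout speed;
    the max-range trajectory subtends ground angle psi with
    sin(psi/2) = Q / (2 - Q), Q = v^2 R / mu, i.e. sin(psi/2) = h/R."""
    v_sq = 2.0 * MU * (1.0 / R_EARTH - 1.0 / (R_EARTH + h))
    q = v_sq * R_EARTH / MU                  # equals 2h/(R+h)
    half_angle = math.asin(q / (2.0 - q))    # algebraically asin(h/R)
    return 2.0 * half_angle * R_EARTH
```

As a sanity check, for small h the expression reduces to the flat-Earth result v²/g = 2h, while an apogee of 3700 km (roughly the altitude reported for a 2017 lofted test) yields a range of about 7900 km under these idealizations.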
Optimum detection for extracting maximum information from symmetric qubit sets
International Nuclear Information System (INIS)
Mizuno, Jun; Fujiwara, Mikio; Sasaki, Masahide; Akiba, Makoto; Kawanishi, Tetsuya; Barnett, Stephen M.
2002-01-01
We demonstrate a class of optimum detection strategies for extracting the maximum information from sets of equiprobable real symmetric qubit states of a single photon. These optimum strategies were predicted by Sasaki et al. [Phys. Rev. A 59, 3325 (1999)]. The peculiar aspect is that detections with at least three outputs suffice for optimum extraction of information, regardless of the number of signal elements. The cases of ternary (or trine), quinary, and septenary polarization signals are studied, where a standard von Neumann detection (a projection onto a binary orthogonal basis) fails to access the maximum information. Our experiments demonstrate that it is possible with present technologies to attain about 96% of the theoretical limit.
A study of long term ageing effects in A533B Class 1 and A508 Class 3 steels
International Nuclear Information System (INIS)
Druce, S.G.
1981-08-01
The effects of long-term thermal ageing treatments on notched impact fracture properties have been studied in two commercially produced PWR pressure vessel steels, A533B Class 1 and A508 Class 3. Heat treatments of up to 10,000 h duration at temperatures between 300 and 600 °C have been investigated. Additionally, the effects of specimen size, specimen orientation, specimen position within the plate, and a prior post-weld heat treatment on subsequent fracture behaviour following thermal ageing have been evaluated for the A533B Class 1 material. The susceptibility of both materials to temper embrittlement effects is relatively low; the maximum increase in transition temperature following thermal ageing treatments in the temperature range 300 to 500 °C is about 40 to 45 °C. Thermal ageing at 600 °C for times in excess of 100 h produces microstructural changes resulting in larger increases in transition temperature. For the A533B material, specimen position and orientation have a large influence on impact behaviour but do not affect the susceptibility to temper embrittlement. Post-weld heat treatment has little or no influence on impact fracture behaviour before further isothermal ageing treatments, nor on susceptibility to temper embrittlement. (author)
On the Pontryagin maximum principle for systems with delays. Economic applications
Kim, A. V.; Kormyshev, V. M.; Kwon, O. B.; Mukhametshin, E. R.
2017-11-01
The Pontryagin maximum principle [6] is the keystone of finite-dimensional optimal control theory [1, 2, 5]. Ever since the maximum principle was first formulated, it has been important to extend it to various classes of dynamical systems. In the paper we consider some aspects of the application of i-smooth analysis [3, 4] in the theory of the Pontryagin maximum principle [6] for systems with delays; the obtained results can be applied in elaborating optimal program controls in economic models with delays.
Maximum permissible voltage of YBCO coated conductors
Energy Technology Data Exchange (ETDEWEB)
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage, resistance, and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC, and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm, and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.
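The closing design step of this abstract (sizing the total conductor length from the maximum permissible voltage) is a simple division, sketched below with the reported 100 ms values. The 10 kV design voltage in the usage example is an illustrative assumption, not a figure from the paper.

```python
def required_cc_length_m(design_voltage_v, max_permissible_v_per_cm):
    """Minimum YBCO coated-conductor length (m) so that the voltage per
    unit length during a limited fault stays at or below the maximum
    permissible value reported for the tape."""
    length_cm = design_voltage_v / max_permissible_v_per_cm
    return length_cm / 100.0

# Reported 100 ms values: SJTU CC 0.72 V/cm, 12 mm AMSC CC 0.52 V/cm,
# 4 mm AMSC CC 1.2 V/cm. A hypothetical 10 kV drop across SJTU tape:
# required_cc_length_m(10000, 0.72) -> about 139 m of conductor.
```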
Robinette, Kathleen M; Veitch, Daisy
2016-08-01
To provide a review of sustainable sizing practices that reduce waste, increase sales, and simultaneously produce safer, better fitting, accommodating products. Sustainable sizing involves a set of methods good for both the environment (sustainable environment) and business (sustainable business). Sustainable sizing methods reduce (1) materials used, (2) the number of sizes or adjustments, and (3) the amount of product unsold or marked down for sale. This reduces waste and cost. The methods can also increase sales by fitting more people in the target market and produce happier, loyal customers with better fitting products. This is a mini-review of methods that result in more sustainable sizing practices. It also reviews and contrasts current statistical and modeling practices that lead to poor fit and sizing. Fit-mapping and the use of cases are two excellent methods suited for creating sustainable sizing, when real people (vs. virtual people) are used. These methods are described and reviewed. Evidence presented supports the view that virtual fitting with simulated people and products is not yet effective. Fit-mapping and cases with real people and actual products result in good design and products that are fit for person, fit for purpose, with good accommodation and comfortable, optimized sizing. While virtual models have been shown to be ineffective for predicting or representing fit, there is an opportunity to improve them by adding fit-mapping data to the models. This will require saving fit data, product data, anthropometry, and demographics in a standardized manner. For this success to extend to the wider design community, the development of a standardized method of data collection for fit-mapping with a globally shared fit-map database is needed. It will enable the world community to build knowledge of fit and accommodation and generate effective virtual fitting for the future. A standardized method of data collection that tests products' fit methodically
DEFF Research Database (Denmark)
Hansen, Pelle Guldborg; Jespersen, Andreas Maaløe; Skov, Laurits Rhoden
2015-01-01
trash bags according to the size of plates and weighed in bulk. Results: Those eating from smaller plates (n=145) left significantly less food to waste (average 14.8 g) than participants eating from standard plates (n=75) (average 20 g), amounting to a reduction of 25.8%. Conclusions: Our field experiment tests the hypothesis that a decrease in the size of food plates may lead to significant reductions in food waste from buffets. It supports and extends the set of circumstances in which a recent experiment found that reduced dinner plates in a hotel chain led to reduced quantities of leftovers.
Effect of roasting degree on the antioxidant activity of different Arabica coffee quality classes.
Odžaković, Božana; Džinić, Natalija; Kukrić, Zoran; Grujić, Slavica
2016-01-01
The antioxidant activity of coffee depends on a variety of bioactive components in coffee beans. Antioxidant activity largely depends on the quality class of the coffee. The coffee samples of 1st-class quality (maximum 8 black beans per 300 g sample and large bean size) had higher antioxidant activity than samples of 2nd-class quality (maximum 19 black beans per 300 g sample and medium-sized beans).
Damascus steel ledeburite class
Sukhanov, D. A.; Arkhangelsky, L. B.; Plotnikova, N. V.
2017-02-01
It was discovered that in some blades of Damascus steel the excess cementite has an unusual origin, different from the redundant phases of secondary cementite, ledeburite cementite, and primary cementite in iron-carbon alloys. The morphological peculiarity of the separate cementite particles in Damascus steel lies in the abnormal size of the excess carbides, which have the shape of irregular prisms. Three hypotheses for the formation of excess cementite in the form of faceted prismatic carbides are considered. The first hypothesis is based on thermal fission of cementite from a few isolated grains. The second is based on the fragmentation of cementite into separate pieces during deformation. The third is based on the transformation of metastable cementite into stable angular eutectic carbide. It is shown that the angular carbides form within the original metastable ledeburite colony, and so are called "eutectic carbides". It is established that high-purity white cast iron is converted into Damascus steel during isothermal soaking at annealing. It was revealed that some blades of Damascus steel of the ledeburite class do not contain crushed ledeburite in their microstructure. It is shown that the pattern of carbide heterogeneity of this Damascus steel consists entirely of angular eutectic carbides. We believe that such Damascus steel is a non-heat-resistant steel of the ledeburite class, sharing structural characteristics with semi-heat-resistant die steel and heat-resistant high-speed steel but differing from them in the nature of the excess carbide phase.
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
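The plateau finding reported above has a simple quantitative consequence that can be sketched: if the annual death risk is flat at roughly 50% beyond some age, survivorship decays geometrically, so each extra year of observed maximum lifespan requires roughly a doubling of the number of people reaching the plateau. The function below is a back-of-envelope illustration of that arithmetic, not the study's actual cohort model.

```python
def expected_survivors(n, start_age, end_age, q=0.5):
    """Expected number alive at end_age out of n alive at start_age,
    under a flat annual death risk q (the mortality-plateau hypothesis).
    q = 0.5 is the approximate plateau value reported in the study."""
    years = end_age - start_age
    return n * (1.0 - q) ** years

# With q = 0.5, survivorship halves every year, so the observed maximum
# age moves only with the logarithm of cohort size -- consistent with a
# maximum age that barely increases even as cohorts grow.
```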
Assessment of multi class kinematic wave models
Van Wageningen-Kessels, F.L.M.; Van Lint, J.W.C.; Vuik, C.; Hoogendoorn, S.P.
2012-01-01
In the last decade many multi class kinematic wave (MCKW) traffic flow models have been proposed. MCKW models introduce heterogeneity among vehicles and drivers. For example, they take into account differences in (maximum) velocities and driving style. Nevertheless, the models are macroscopic and the
Brand, Judith, Ed.
1995-01-01
"Exploring" is a magazine of science, art, and human perception that communicates ideas museum exhibits cannot demonstrate easily by using experiments and activities for the classroom. This issue concentrates on size, examining it from a variety of viewpoints. The focus allows students to investigate and discuss interconnections among…
Revealing the Maximum Strength in Nanotwinned Copper
DEFF Research Database (Denmark)
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Modelling maximum canopy conductance and transpiration in ...
African Journals Online (AJOL)
There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...
Thin-plate spline analysis of craniofacial growth in Class I and Class II subjects.
Franchi, Lorenzo; Baccetti, Tiziano; Stahl, Franka; McNamara, James A
2007-07-01
To compare the craniofacial growth characteristics of untreated subjects with Class II division 1 malocclusion with those of subjects with normal (Class I) occlusion from the prepubertal through the postpubertal stages of development. The Class II division 1 sample consisted of 17 subjects (11 boys and six girls). The Class I sample also consisted of 17 subjects (13 boys and four girls). Three craniofacial regions (cranial base, maxilla, and mandible) were analyzed on the lateral cephalograms of the subjects in both groups by means of thin-plate spline analysis at T1 (prepubertal) and T2 (postpubertal). Both cross-sectional and longitudinal comparisons were performed on both size and shape differences between the two groups. The results showed an increased cranial base angulation as a morphological feature of Class II malocclusion at the prepubertal developmental phase. Maxillary changes in either shape or size were not significant. Subjects with Class II malocclusion exhibited a significant deficiency in the size of the mandible at the completion of active craniofacial growth as compared with Class I subjects. A significant deficiency in the size of the mandible became apparent in Class II subjects during the circumpubertal period and it was still present at the completion of active craniofacial growth.
Town of Cary, North Carolina — This data is specific to Parks and Recreation classes, workshops, and activities within the course catalog. It contains an entry for upcoming classes. This data set...
Class Notes for "Class-Y-News."
Stuart, Judy L.
1991-01-01
A self-contained class of students with mild to moderate disabilities published a monthly newsletter which was distributed to students' families. Students became involved in writing, typing, drawing, folding, basic editing, and disseminating. (JDD)
Classed identities in adolescence
Jay, Sarah
2015-01-01
peer-reviewed The central argument of this thesis is that social class remains a persistent system of inequality in education, health, life chances and opportunities. Therefore class matters. But why is it that so little attention has been paid to class in the psychological literature? Three papers are presented here which draw together theoretical advances in psychological understandings of group processes and sociological understandings of the complexity of class. As western labour marke...
Direct maximum parsimony phylogeny reconstruction from genotype data
Directory of Open Access Journals (Sweden)
Ravi R
2007-12-01
Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data; phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
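The "minimum number of mutations" quantity at the heart of this abstract is, for a fixed tree and phased data, computed by Fitch's small-parsimony algorithm. The sketch below shows that building block for one site; the paper's actual method is far more involved (it searches tree space and works directly on unphased genotypes).

```python
def fitch_score(tree, leaf_states):
    """Minimum number of mutations at one site on a fixed rooted binary
    tree (Fitch's small-parsimony algorithm). `tree` is a nested tuple
    whose leaves are names; `leaf_states` maps leaf name -> allele."""
    mutations = 0

    def post(node):
        nonlocal mutations
        if isinstance(node, str):             # leaf: its observed allele
            return {leaf_states[node]}
        left, right = (post(child) for child in node)
        inter = left & right
        if inter:                              # states agree: no mutation
            return inter
        mutations += 1                         # disagreement forces a mutation
        return left | right

    post(tree)
    return mutations
```

On the tree ((a,b),(c,d)), the split pattern a=b=0, c=d=1 needs a single mutation, while the incompatible pattern a=c=0, b=d=1 needs two, illustrating how tree shape changes the mutation count.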
Energy Technology Data Exchange (ETDEWEB)
Forst, Michael
2012-11-01
The shakeout in the solar cell and module industry is in full swing. While the number of companies and production locations shutting down in the Western world is increasing, the capacity expansion in the Far East seems to be unbroken. Size in combination with a good sales network has become the key to success for surviving in the current storm. The trade war with China already looming on the horizon is adding to the uncertainties. (orig.)
2010-01-01
... the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... Standards for Grades of Apples for Processing Size § 51.344 Size. (a) The minimum and maximum sizes or range...
A Baseline Air Quality Assessment Onboard a Victoria Class Submarine: HMCS Windsor
National Research Council Canada - National Science Library
Severs, Y. D
2006-01-01
.... This trial thus represents a baseline habitability evaluation of Canada's Victoria class submarines to confirm compliance with the current maximum permissible contaminant limits stipulated in the Air...
MXLKID: a maximum likelihood parameter identifier
International Nuclear Information System (INIS)
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
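The identification scheme this record describes can be illustrated with a toy one-parameter version: form a Gaussian log-likelihood from noisy measurements of a dynamic model and maximize it over the parameter. MXLKID itself handles nonlinear multi-parameter systems with gradient-based maximization; the decay model, noise level, and grid search below are illustrative assumptions only.

```python
import math
import random

def log_likelihood(theta, times, data, sigma):
    """Gaussian log-likelihood of decay rate theta for the toy model
    y(t) = exp(-theta * t) observed with noise of std sigma."""
    ll = 0.0
    for t, y in zip(times, data):
        resid = y - math.exp(-theta * t)
        ll += -0.5 * (resid / sigma) ** 2 \
              - math.log(sigma * math.sqrt(2.0 * math.pi))
    return ll

def mle_decay_rate(times, data, sigma, lo=0.0, hi=5.0, steps=5000):
    """Identify theta by maximizing the likelihood over a parameter grid
    (a crude stand-in for MXLKID's gradient-based maximization)."""
    grid = (lo + (hi - lo) * i / steps for i in range(steps + 1))
    return max(grid, key=lambda th: log_likelihood(th, times, data, sigma))
```

Simulating data from a true decay rate of 1.3 with small noise, the grid maximizer recovers the parameter to within the grid resolution plus the statistical error.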
International Nuclear Information System (INIS)
Geis, J.W.
1992-01-01
This paper discusses a Space Power Subsystem Sizing program which has been developed by the Aerospace Power Division of Wright Laboratory, Wright-Patterson Air Force Base, Ohio. The Space Power Subsystem program (SPSS) contains the necessary equations and algorithms to calculate photovoltaic array power performance, including end-of-life (EOL) and beginning-of-life (BOL) specific power (W/kg) and areal power density (W/m²). Additional equations and algorithms are included in the spreadsheet for determining maximum eclipse time as a function of orbital altitude and inclination. The Space Power Subsystem Sizing program (SPSS) has been used to determine the performance of several candidate power subsystems for both Air Force and SDIO potential applications. Trade-offs have been made between subsystem weight and areal power density (W/m²) as influenced by orbital high-energy particle flux and time in orbit
STUDY ON MAXIMUM SPECIFIC SLUDGE ACTIVITY OF DIFFERENT ANAEROBIC GRANULAR SLUDGE BY BATCH TESTS
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The maximum specific sludge activities of granular sludge from large-scale UASB, IC and Biobed anaerobic reactors were investigated by batch tests. The limiting factors related to maximum specific sludge activity (diffusion, substrate type, substrate concentration and granule size) were studied. A general principle and procedure for the precise measurement of maximum specific sludge activity are suggested. The potential loading-rate capacities of the IC and Biobed anaerobic reactors were analyzed and compared using the batch test results.
Loosely coupled class families
DEFF Research Database (Denmark)
Ernst, Erik
2001-01-01
are expressed using virtual classes seem to be very tightly coupled internally. While clients have achieved the freedom to dynamically use one or the other family, it seems that any given family contains a fixed set of classes and we will need to create an entire family of its own just in order to replace one...... of the members with another class. This paper shows how to express class families in such a manner that the classes in these families can be used in many different combinations, still enabling family polymorphism and ensuring type safety....
Tutte sets in graphs II: The complexity of finding maximum Tutte sets
Bauer, D.; Broersma, Haitze J.; Kahl, N.; Morgana, A.; Schmeichel, E.; Surowiec, T.
2007-01-01
A well-known formula of Tutte and Berge expresses the size of a maximum matching in a graph $G$ in terms of what is usually called the deficiency. A subset $X$ of $V(G)$ for which this deficiency is attained is called a Tutte set of $G$. While much is known about maximum matchings, less is known
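The Tutte-Berge formula referred to above says that the maximum matching size is ½(|V| − def(G)), where the deficiency def(G) = max over X ⊆ V of (number of odd components of G − X) − |X|, and a Tutte set is an X attaining this maximum. A brute-force sketch (exponential in |V|, so only for tiny graphs; all names are illustrative) makes the definition concrete:

```python
from itertools import combinations

def components(vertices, edges):
    """Connected components of the graph induced on `vertices`."""
    vertices = set(vertices)
    comps = []
    while vertices:
        stack = [vertices.pop()]
        comp = set(stack)
        while stack:
            u = stack.pop()
            for a, b in edges:
                for v in ((b,) if a == u else (a,) if b == u else ()):
                    if v in vertices:
                        vertices.remove(v)
                        comp.add(v)
                        stack.append(v)
        comps.append(comp)
    return comps

def deficiency_and_tutte_set(V, E):
    """Maximize odd(G - X) - |X| over all subsets X (brute force)."""
    best, best_X = -1, None
    for r in range(len(V) + 1):
        for X in combinations(V, r):
            rest = [v for v in V if v not in X]
            sub = [e for e in E if e[0] in rest and e[1] in rest]
            odd = sum(1 for c in components(rest, sub) if len(c) % 2 == 1)
            if odd - len(X) > best:
                best, best_X = odd - len(X), set(X)
    return best, best_X

# Path on 3 vertices: deficiency 1, so maximum matching = (3 - 1) / 2 = 1.
V, E = [0, 1, 2], [(0, 1), (1, 2)]
d, X = deficiency_and_tutte_set(V, E)
max_matching = (len(V) - d) // 2
print(d, max_matching)
```

The hardness result in the paper concerns finding *maximum* Tutte sets; the sketch above only exhibits the formula, not an efficient algorithm.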
Luminosity class of neutron reflectometers
Energy Technology Data Exchange (ETDEWEB)
Pleshanov, N.K., E-mail: pnk@pnpi.spb.ru
2016-10-21
The formulas that relate neutron fluxes at reflectometers with differing q-resolutions are derived. The reference luminosity is defined as the maximum flux for measurements with a standard resolution. Methods of assessing the reference luminosity of neutron reflectometers are presented for monochromatic and white beams, which are collimated with either double-diaphragm or small-angle Soller systems. The values of the reference luminosity for unified parameters define the luminosity class of a reflectometer. The luminosity class characterizes (each operation mode of) the instrument by one number and can be used to classify operating reflectometers and optimize designed reflectometers. As an example, the luminosity class of the neutron reflectometer NR-4M (reactor WWR-M, Gatchina) is found for four operation modes: 2.1 (monochromatic non-polarized beam), 1.9 (monochromatic polarized beam), 1.5 (white non-polarized beam), 1.1 (white polarized beam); it is shown that optimization of measurements may increase the flux at the sample by up to two orders of magnitude with monochromatic beams and up to one order of magnitude with white beams. A fan-beam reflectometry scheme with monochromatic neutrons is suggested, and the expected increase in luminosity is evaluated. A tuned-phase chopper with a variable TOF resolution is recommended for reflectometry with white beams.
Maximum neutron flux in thermal reactors
International Nuclear Information System (INIS)
Strugar, P.V.
1968-12-01
A direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples
Maximum allowable load on wheeled mobile manipulators
International Nuclear Information System (INIS)
Habibnejad Korayem, M.; Ghariblu, H.
2003-01-01
This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important factors. To resolve the extra D.O.F. introduced by base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy
Maximum phytoplankton concentrations in the sea
DEFF Research Database (Denmark)
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...
Gait and Function in Class III Obesity
Directory of Open Access Journals (Sweden)
Catherine Ling
2012-01-01
Full Text Available Walking, more specifically gait, is an essential component of daily living. Walking is a very different activity for individuals with a Body Mass Index (BMI of 40 or more (Class III obesity compared with those who are overweight or obese with a BMI between 26–35. Yet all obesity weight classes receive the same physical activity guidelines and recommendations. This observational study examined the components of function and disability in a group with Class III obesity and a group that is overweight or has Class I obesity. Significant differences were found between the groups in the areas of gait, body size, health condition, and activity capacity and participation. The Timed Up and Go test, gait velocity, hip circumference, and stance width appear to be most predictive of activity capacity as observed during gait assessment. The findings indicate that Class III-related gait is pathologic and not a normal adaptation.
Kuzyakov, Yakov; Razavi, Bahar
2017-04-01
Estimation of the soil volume affected by roots - the rhizosphere - is crucial to assess the effects of plants on properties and processes in soils and the dynamics of nutrients, water, microorganisms and soil organic matter. The challenges in assessing rhizosphere size are: 1) the continuum of properties between the root surface and root-free soil, 2) differences in the distributions of various properties (carbon, microorganisms and their activities, various nutrients, enzymes, etc.) along and across the roots, 3) temporal changes of properties and processes. Thus, to describe rhizosphere size and root effects, a holistic approach is necessary. We collected literature data and our own data on the rhizosphere gradients of a broad range of physico-chemical and biological properties: pH, CO2, oxygen, redox potential, water uptake, various nutrients (C, N, P, K, Ca, Mg, Mn and Fe), organic compounds (glucose, carboxylic acids, amino acids), and activities of enzymes of the C, N, P and S cycles. The collected data were obtained using destructive approaches (thin-layer slicing), rhizotron studies and in situ visualization techniques: optodes, zymography, sensitive gels, 14C and neutron imaging. The root effects were pronounced from less than 0.5 mm (nutrients with slow diffusion) up to more than 50 mm (for gases). However, the most common effects were between 1 - 10 mm. Sharp gradients (e.g. for P, carboxylic acids, enzyme activities) made it possible to calculate clear rhizosphere boundaries and thus the soil volume affected by roots. First analyses were done to assess the effects of soil texture and moisture as well as root system and age on these gradients. Most properties can be described by two curve types: exponential saturation and S curve, each with increasing and decreasing concentration profiles from the root surface. The gradient-based distribution functions were calculated and used to extrapolate to the whole soil depending on root density and rooting intensity. We
Peyronie's Reconstruction for Maximum Length and Girth Gain: Geometrical Principles
Directory of Open Access Journals (Sweden)
Paulo H. Egydio
2008-01-01
Full Text Available Peyronie's disease has been associated with penile shortening and some degree of erectile dysfunction. Surgical reconstruction should aim at providing a functional penis, that is, rectifying the penis with enough rigidity for sexual intercourse. The procedure should be discussed preoperatively in terms of length and girth reconstruction in order to improve patient satisfaction. Tunical reconstruction for maximum penile length and girth restoration should be based on the maximum possible length of the dissected neurovascular bundle and on the application of geometrical principles to define the precise site and size of the tunical incision and grafting procedure. As penile rectification and rigidity are required to achieve complete functional restoration of the penis, and 20 to 54% of patients experience associated erectile dysfunction, penile straightening alone may not be enough to provide complete functional restoration. Therefore, phosphodiesterase inhibitors, self-injection, or a penile prosthesis may need to be added in some cases.
Statistical Inference for a Class of Multivariate Negative Binomial Distributions
DEFF Research Database (Denmark)
Rubak, Ege H.; Møller, Jesper; McCullagh, Peter
This paper considers statistical inference procedures for a class of models for positively correlated count variables called -permanental random fields, and which can be viewed as a family of multivariate negative binomial distributions. Their appealing probabilistic properties have earlier been...... studied in the literature, while this is the first statistical paper on -permanental random fields. The focus is on maximum likelihood estimation, maximum quasi-likelihood estimation and on maximum composite likelihood estimation based on uni- and bivariate distributions. Furthermore, new results...
DEFF Research Database (Denmark)
Harrits, Gitte Sommer
2013-01-01
Even though contemporary discussions of class have moved forward towards recognizing a multidimensional concept of class, empirical analyses tend to focus on cultural practices in a rather narrow sense, that is, as practices of cultural consumption or practices of education. As a result......, discussions within political sociology have not yet utilized the merits of a multidimensional conception of class. In light of this, the article suggests a comprehensive Bourdieusian framework for class analysis, integrating culture as both a structural phenomenon co-constitutive of class and as symbolic...... practice. Further, the article explores this theoretical framework in a multiple correspondence analysis of a Danish survey, demonstrating how class and political practices are indeed homologous. However, the analysis also points at several elements of field autonomy, and the concluding discussion...
Bhanot, Gyan [Princeton, NJ; Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY
2009-09-08
Class network routing is implemented in a network such as a computer network comprising a plurality of parallel compute processors at nodes thereof. Class network routing allows a compute processor to broadcast a message to a range (one or more) of other compute processors in the computer network, such as processors in a column or a row. Normally this type of operation requires a separate message to be sent to each processor. With class network routing pursuant to the invention, a single message is sufficient, which generally reduces the total number of messages in the network as well as the latency to do a broadcast. Class network routing is also applied to dense matrix inversion algorithms on distributed memory parallel supercomputers with hardware class function (multicast) capability. This is achieved by exploiting the fact that the communication patterns of dense matrix inversion can be served by hardware class functions, which results in faster execution times.
Miyamoto, Yuri
2017-12-01
A large body of research in Western cultures has demonstrated the psychological and health effects of social class. This review outlines a cultural psychological approach to social stratification by comparing psychological and health manifestations of social class across Western and East Asian cultures. These comparisons suggest that cultural meaning systems shape how people make meaning and respond to material/structural conditions associated with social class, thereby leading to culturally divergent manifestations of social class. Specifically, unlike their counterparts in Western cultures, individuals of high social class in East Asian cultures tend to show high conformity and other-orientated psychological attributes. In addition, cultures differ in how social class impacts health (i.e. on which bases, through which pathways, and to what extent). Copyright © 2017 Elsevier Ltd. All rights reserved.
Semantic Analysis of Virtual Classes and Nested Classes
DEFF Research Database (Denmark)
Madsen, Ole Lehrmann
1999-01-01
Virtual classes and nested classes are distinguishing features of BETA. Nested classes originated from Simula, but until recently they have not been part of main stream object- oriented languages. C++ has a restricted form of nested classes and they were included in Java 1.1. Virtual classes...... classes and parameterized classes have been made. Although virtual classes and nested classes have been used in BETA for more than a decade, their implementation has not been published. The purpose of this paper is to contribute to the understanding of virtual classes and nested classes by presenting...
DEFF Research Database (Denmark)
Faber, Stine Thidemann; Prieur, Annick
This paper asks how class can have importance in one of the world's most equal societies: Denmark. The answer is that class here appears in disguised forms. The field under study is a city, Aalborg, in the midst of transition from a stronghold of industrialism to a post-industrial economy....... The paper also raises questions about how sociological discourses may contribute to the veiling of class....
Maximum gravitational redshift of white dwarfs
International Nuclear Information System (INIS)
Shapiro, S.L.; Teukolsky, S.A.
1976-01-01
The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ2, ζ3, and ζ4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores
Maximum entropy analysis of EGRET data
DEFF Research Database (Denmark)
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \\cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \\cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic...... uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky....
Shower maximum detector for SDC calorimetry
International Nuclear Information System (INIS)
Ernwein, J.
1994-01-01
A prototype for the SDC end-cap (EM) calorimeter complete with a pre-shower and a shower maximum detector was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs
Topics in Bayesian statistics and maximum entropy
International Nuclear Information System (INIS)
Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.
1998-12-01
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
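The maximum entropy principle reviewed above can be illustrated with Jaynes' classic dice problem: among all distributions on {1,...,6} with a prescribed mean, the entropy-maximizing one has the exponential form p_i ∝ exp(λi), with λ fixed by the mean constraint. The following sketch (illustrative, not from the paper) solves for λ by bisection:

```python
import math

def maxent_dice(target_mean, lo=-5.0, hi=5.0, iters=100):
    """Maximum-entropy distribution on {1..6} with a given mean:
    p_i proportional to exp(lam * i), lam found by bisection."""
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, 7)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / z
    # mean(lam) is increasing in lam, so bisection applies.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in range(1, 7)]
    z = sum(w)
    return [wi / z for wi in w]

# Jaynes' "Brandeis dice": observed mean 4.5 instead of the fair 3.5.
p = maxent_dice(4.5)
print([round(x, 3) for x in p])
```

The resulting distribution tilts probability toward the high faces, exactly as the exponential family form dictates; a uniform prior and mean 3.5 would recover the fair die.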
Density estimation by maximum quantum entropy
International Nuclear Information System (INIS)
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-01-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets
Chris B. LeDoux; John E. Baumgras; R. Bryan Selbe
1989-01-01
PROFIT-PC is a menu-driven, interactive PC (personal computer) program that estimates optimum product mix and maximum net harvesting revenue based on projected product yields and stump-to-mill timber harvesting costs. Required inputs include the number of trees/acre by species and 2-inch diameter-at-breast-height class, delivered product prices by species and product...
A Stochastic Maximum Principle for General Mean-Field Systems
International Nuclear Information System (INIS)
Buckdahn, Rainer; Li, Juan; Ma, Jin
2016-01-01
In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend nonlinearly on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second order variational equations and the corresponding second order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.
A Stochastic Maximum Principle for General Mean-Field Systems
Energy Technology Data Exchange (ETDEWEB)
Buckdahn, Rainer, E-mail: Rainer.Buckdahn@univ-brest.fr [Université de Bretagne-Occidentale, Département de Mathématiques (France); Li, Juan, E-mail: juanli@sdu.edu.cn [Shandong University, Weihai, School of Mathematics and Statistics (China); Ma, Jin, E-mail: jinma@usc.edu [University of Southern California, Department of Mathematics (United States)
2016-12-15
In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend nonlinearly on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second order variational equations and the corresponding second order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.
Marginal Maximum Likelihood Estimation of Item Response Models in R
Directory of Open Access Journals (Sweden)
Matthew S. Johnson
2007-02-01
Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
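The paper develops R functions for marginal maximum likelihood of the generalized partial credit model; as a much smaller illustration of likelihood-based IRT estimation (not the paper's code), the sketch below estimates a respondent's ability θ under a two-parameter logistic (2PL) model by grid-search maximum likelihood. The item parameters and responses are hypothetical:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability of a correct response given ability theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(theta, items, responses):
    """Log-likelihood of a response pattern under ability theta."""
    ll = 0.0
    for (a, b), r in zip(items, responses):
        p = p_correct(theta, a, b)
        ll += math.log(p if r == 1 else 1.0 - p)
    return ll

# Hypothetical items (discrimination a, difficulty b) and responses:
# correct on the two easier items, incorrect on the hardest.
items = [(1.0, -1.0), (1.0, 0.0), (1.0, 1.0)]
responses = [1, 1, 0]

# Maximize over a grid of abilities in [-3, 3].
grid = [-3.0 + 0.01 * i for i in range(601)]
theta_hat = max(grid, key=lambda t: log_likelihood(t, items, responses))
print(round(theta_hat, 2))
```

Marginal ML, as in the paper, additionally integrates the ability out under a population distribution to estimate the item parameters themselves; the grid search here stands in for that heavier machinery.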
Application of the maximum entropy production principle to electrical systems
International Nuclear Information System (INIS)
Christen, Thomas
2006-01-01
For a simple class of electrical systems, the principle of the maximum entropy production rate (MaxEP) is discussed. First, we compare the MaxEP principle and the principle of the minimum entropy production rate and illustrate the superiority of the MaxEP principle for the example of two parallel constant resistors. Secondly, we show that the Steenbeck principle for the electric arc as well as the ohmic contact behaviour of space-charge limited conductors follow from the MaxEP principle. In line with work by Dewar, the investigations seem to suggest that the MaxEP principle can also be applied to systems far from equilibrium, provided appropriate information is available that enters the constraints of the optimization problem. Finally, we apply the MaxEP principle to a mesoscopic system and show that the universal conductance quantum, e²/h, of a one-dimensional ballistic conductor can be estimated
Tablante, Courtney B.; Fiske, Susan T.
2015-01-01
Discussing socioeconomic status in college classes can be challenging. Both teachers and students feel uncomfortable, yet social class matters more than ever. This is especially true, given increased income inequality in the United States and indications that higher education does not reduce this inequality as much as many people hope. Resources…
Generalized Fourier transforms classes
DEFF Research Database (Denmark)
Berntsen, Svend; Møller, Steen
2002-01-01
The Fourier class of integral transforms with kernels $B(\\omega r)$ has by definition inverse transforms with kernel $B(-\\omega r)$. The space of such transforms is explicitly constructed. A slightly more general class of generalized Fourier transforms are introduced. From the general theory...
Taylor, Lewis A., III
2012-01-01
An accessible business school population of undergraduate students was investigated in three independent, but related studies to determine effects on grades due to cutting class and failing to take advantage of optional reviews and study quizzes. It was hypothesized that cutting classes harms exam scores, attending preexam reviews helps exam…
Nonsymmetric entropy and maximum nonsymmetric entropy principle
International Nuclear Information System (INIS)
Liu Chengshi
2009-01-01
Under the frame of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, in deriving power laws.
Maximum speed of dewetting on a fiber
Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus
2011-01-01
A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed
Maximum potential preventive effect of hip protectors
van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.
2007-01-01
OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who
Maximum gain of Yagi-Uda arrays
DEFF Research Database (Denmark)
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....
correlation between maximum dry density and cohesion
African Journals Online (AJOL)
HOD
represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
The maximum-entropy method in superspace
Czech Academy of Sciences Publication Activity Database
van Smaalen, S.; Palatinus, Lukáš; Schneider, M.
2003-01-01
Roč. 59, - (2003), s. 459-469 ISSN 0108-7673 Grant - others:DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords : maximum-entropy method, * aperiodic crystals * electron density Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.558, year: 2003
Achieving maximum sustainable yield in mixed fisheries
Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna
2017-01-01
Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example
5 CFR 534.203 - Maximum stipends.
2010-01-01
... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...
Minimal length, Friedmann equations and maximum density
Energy Technology Data Exchange (ETDEWEB)
Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)
2014-06-16
Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy density which is reached in a finite time.
Dauns, John
2006-01-01
Because traditional ring theory places restrictive hypotheses on all submodules of a module, its results apply only to small classes of already well understood examples. Often, modules with infinite Goldie dimension have finite type dimension, making them amenable to analysis with type dimension, but not Goldie dimension. By working with natural classes and type submodules (TS), Classes of Modules develops the foundations and tools for the next generation of ring and module theory. It shows how to achieve positive results by placing restrictive hypotheses on a small subset of the complement submodules. Furthermore, it explains the existence of various direct sum decompositions merely as special cases of type direct sum decompositions. Carefully developing the foundations of the subject, the authors begin by providing background on the terminology and introducing the different module classes. The module classes consist of torsion, torsion-free, s[M], natural, and prenatural. They expand the discussion by exploring...
An improved maximum power point tracking method for a photovoltaic system
Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes
2016-06-01
In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for a photovoltaic (PV) system is proposed. To achieve both a fast dynamic response and stable steady-state power, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. An algorithm was then proposed to address the wrong decisions that may be made at an abrupt change in irradiance. The proposed auto-scaling variable step-size approach was compared with several other approaches from the literature: the classical fixed step-size, variable step-size, and a recent auto-scaling variable step-size MPPT approach. Simulation results obtained with MATLAB/SIMULINK are given and discussed for validation.
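The auto-scaling idea can be illustrated with a toy perturb-and-observe loop. This is a hedged sketch, not the paper's algorithm: `pv_power` is an invented unimodal power curve, and the step-scaling constants are arbitrary choices.

```python
import numpy as np

def pv_power(duty):
    # Hypothetical unimodal power-vs-duty-cycle curve with its maximum
    # power point (MPP) at duty = 0.6; a stand-in for a real PV panel
    # plus converter, not the model used in the paper.
    return 100.0 * np.exp(-((duty - 0.6) / 0.25) ** 2)

def mppt_variable_step(p_meas, steps=200, d0=0.2):
    # Perturb-and-observe with a step size scaled by the observed power
    # change: large |dP| gives a large step (fast transient response),
    # small |dP| gives a small step (low steady-state oscillation).
    d = d0
    prev_p = p_meas(d)
    direction = 1.0
    for _ in range(steps):
        p = p_meas(d)
        dp = p - prev_p
        if dp < 0:                                # stepped past the MPP
            direction = -direction
        step = min(0.03, 0.005 + 0.01 * abs(dp))  # auto-scaled step size
        prev_p = p
        d = float(np.clip(d + direction * step, 0.0, 1.0))
    return d

d_final = mppt_variable_step(pv_power)
```

The loop climbs quickly while the power changes strongly, then settles into small oscillations around the MPP.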
Application of the maximum entropy method to profile analysis
International Nuclear Information System (INIS)
Armstrong, N.; Kalceff, W.; Cline, J.P.
1999-01-01
Full text: A maximum entropy (MaxEnt) method for analysing crystallite size- and strain-induced x-ray profile broadening is presented. This method treats the problems of determining the specimen profile, crystallite size distribution, and strain distribution in a general way by considering them as inverse problems. A common difficulty faced by many experimenters is their inability to determine a well-conditioned solution of the integral equation, which preserves the positivity of the profile or distribution. We show that the MaxEnt method overcomes this problem, while also enabling a priori information, in the form of a model, to be introduced into it. Additionally, we demonstrate that the method is fully quantitative, in that uncertainties in the solution profile or solution distribution can be determined and used in subsequent calculations, including mean particle sizes and rms strain. An outline of the MaxEnt method is presented for the specific problems of determining the specimen profile and crystallite or strain distributions for the correspondingly broadened profiles. This approach offers an alternative to standard methods such as those of Williamson-Hall and Warren-Averbach. An application of the MaxEnt method is demonstrated in the analysis of alumina size-broadened diffraction data (from NIST, Gaithersburg). It is used to determine the specimen profile and column-length distribution of the scattering domains. Finally, these results are compared with the corresponding Williamson-Hall and Warren-Averbach analyses. Copyright (1999) Australian X-ray Analytical Association Inc
Distribution of phytoplankton groups within the deep chlorophyll maximum
Latasa, Mikel
2016-11-01
The fine vertical distribution of phytoplankton groups within the deep chlorophyll maximum (DCM) was studied in the NE Atlantic during summer stratification. A simple but unconventional sampling strategy allowed examining the vertical structure with ca. 2 m resolution. The distribution of Prochlorococcus, Synechococcus, chlorophytes, pelagophytes, small prymnesiophytes, coccolithophores, diatoms, and dinoflagellates was investigated with a combination of pigment-markers, flow cytometry and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer. The more symmetrical distribution of chlorophyll than cells around the DCM peak was due to the increase of pigment per cell with depth. We found a vertical alignment of phytoplankton groups within the DCM layer indicating preferences for different ecological niches in a layer with strong gradients of light and nutrients. Prochlorococcus occupied the shallowest and diatoms the deepest layers. Dinoflagellates, Synechococcus and small prymnesiophytes preferred shallow DCM layers, and coccolithophores, chlorophytes and pelagophytes showed a preference for deep layers. Cell size within groups changed with depth in a pattern related to their mean size: the cell volume of the smallest group increased the most with depth while the cell volume of the largest group decreased the most. The vertical alignment of phytoplankton groups confirms that the DCM is not a homogeneous entity and indicates groups’ preferences for different ecological niches within this layer.
Noise and physical limits to maximum resolution of PET images
Energy Technology Data Exchange (ETDEWEB)
Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU 'Gregorio Maranon', E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2007-10-01
In this work we show that there is a limit to the maximum resolution achievable with a high-resolution PET scanner, as well as to the best attainable signal-to-noise ratio. These limits are ultimately related to the physical effects involved in the emission and detection of the radiation, and thus cannot be overcome by any particular reconstruction method. These effects prevent the spatial high-frequency components of the imaged structures from being recorded by the scanner, so the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is outlined as a factor that limits high-resolution imaging in tomographs with small crystal sizes. These results have implications for deciding the optimal number of voxels of the reconstructed image and for designing better PET scanners.
Maximum and minimum entropy states yielding local continuity bounds
Hanson, Eric P.; Datta, Nilanjana
2018-04-01
Given an arbitrary quantum state σ, we obtain an explicit construction of a state ρ*ε(σ) [respectively, ρ*,ε(σ)] which has the maximum (respectively, minimum) entropy among all states which lie in a specified neighborhood (ε-ball) of σ. Computing the entropy of these states leads to a local strengthening of the continuity bound of the von Neumann entropy, i.e., the Audenaert-Fannes inequality. Our bound is local in the sense that it depends on the spectrum of σ. The states ρ*ε(σ) and ρ*,ε(σ) depend only on the geometry of the ε-ball and are in fact optimizers for a larger class of entropies. These include the Rényi entropy and the min- and max-entropies, providing explicit formulas for certain smoothed quantities. This allows us to obtain local continuity bounds for these quantities as well. In obtaining this bound, we first derive a more general result which may be of independent interest, namely, a necessary and sufficient condition under which a state maximizes a concave and Gâteaux-differentiable function in an ε-ball around a given state σ. Examples of such a function include the von Neumann entropy and the conditional entropy of bipartite states. Our proofs employ tools from the theory of convex optimization under non-differentiable constraints, in particular Fermat's rule, and majorization theory.
Hit size effectiveness in relation to the microdosimetric site size
International Nuclear Information System (INIS)
Varma, M.N.; Wuu, C.S.; Zaider, M.
1994-01-01
This paper examines the effect of site size (that is, the diameter of the microdosimetric volume) on the hit size effectiveness function (HSEF), q(y), for several endpoints relevant in radiation protection. A Bayesian and maximum entropy approach is used to solve the integral equations that determine, given microdosimetric spectra and measured initial slopes, the function q(y). All microdosimetric spectra have been calculated de novo. The somewhat surprising conclusion of this analysis is that site size plays only a minor role in selecting the hit size effectiveness function q(y). It thus appears that practical means (e.g. conventional proportional counters) are already at hand to actually implement the HSEF as a radiation protection tool. (Author)
Catastrophic Disruption Threshold and Maximum Deflection from Kinetic Impact
Cheng, A. F.
2017-12-01
The use of a kinetic impactor to deflect an asteroid on a collision course with Earth was described in the NASA Near-Earth Object Survey and Deflection Analysis of Alternatives (2007) as the most mature approach for asteroid deflection and mitigation. The NASA DART mission will demonstrate asteroid deflection by kinetic impact at the Potentially Hazardous Asteroid 65803 Didymos in October 2022. The kinetic impactor approach is considered applicable with warning times of 10 years or more and hazardous asteroid diameters of 400 m or less. In principle, a larger kinetic impactor bringing greater kinetic energy could cause a larger deflection, but input of excessive kinetic energy will cause catastrophic disruption of the target, possibly leaving large fragments still on a collision course with Earth. Thus the catastrophic disruption threshold limits the maximum deflection from a kinetic impactor. An often-cited rule of thumb states that the maximum deflection is 0.1 times the escape velocity before the target is disrupted. It turns out this rule of thumb does not work well. A comparison with numerical simulation results shows that a similar rule applies in the gravity limit, for large targets of more than 300 m, where the maximum deflection is roughly the escape velocity at a momentum enhancement factor β=2. In the gravity limit, the rule of thumb corresponds to pure momentum coupling (μ=1/3), but simulations find a slightly different scaling, μ=0.43. In the smaller target size range to which kinetic impactors would apply, the catastrophic disruption limit is strength-controlled. Unless the target is unusually weak, a DART-like impactor will not disrupt any target asteroid, even down to sizes significantly smaller than the 50 m below which a hazardous object would not penetrate the atmosphere in any case.
Maximum margin semi-supervised learning with irrelevant data.
Yang, Haiqin; Huang, Kaizhu; King, Irwin; Lyu, Michael R
2015-10-01
Semi-supervised learning (SSL) is a typical learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume that the unlabeled data are relevant to the labeled data, i.e., that they follow the same distribution as the targeted labeled data. In this paper, we address a different, yet formidable, scenario in semi-supervised classification, where the unlabeled data may contain data irrelevant to the labeled data. To tackle this problem, we develop a maximum margin model, named the tri-class support vector machine (3C-SVM), which utilizes all available training data while seeking a hyperplane that separates the targeted data well. Our 3C-SVM exhibits several characteristics and advantages. First, it needs no prior knowledge or explicit assumption on data relatedness. On the contrary, it can relieve the effect of irrelevant unlabeled data based on the logistic principle and the maximum entropy principle. That is, 3C-SVM approaches an ideal classifier: it relies heavily on the labeled data, is confident on the relevant data lying far away from the decision hyperplane, and maximally ignores the irrelevant data, which are hardly distinguishable. Second, theoretical analysis is provided to establish under what conditions the irrelevant data can help to seek the hyperplane. Third, 3C-SVM is a generalized model that unifies several popular maximum margin models, including standard SVMs, semi-supervised SVMs (S3VMs), and SVMs learned from the universum (U-SVMs), as its special cases. More importantly, we deploy a concave-convex procedure to solve the proposed 3C-SVM, transforming the original mixed integer program to a semi-definite programming relaxation, and finally to a sequence of quadratic programming subproblems, which yields the same worst-case time complexity as that of S3VMs. Finally, we demonstrate the effectiveness and efficiency of our proposed 3C-SVM through systematic experimental comparisons.
International Nuclear Information System (INIS)
1991-01-01
The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT) [de]
Kelkboom, E.J.C.; Breebaart, J.; Buhan, I.R.; Veldhuis, Raymond N.J.
Template protection techniques are used within biometric systems in order to protect the stored biometric template against privacy and security threats. A great portion of template protection techniques are based on extracting a key from, or binding a key to the binary vector derived from the
Kelkboom, E.J.C.; Breebaart, Jeroen; Buhan, I.R.; Veldhuis, Raymond N.J.; Vijaya Kumar, B.V.K.; Prabhakar, Salil; Ross, Arun A.
2010-01-01
Template protection techniques are used within biometric systems in order to protect the stored biometric template against privacy and security threats. A great portion of template protection techniques are based on extracting a key from or binding a key to a biometric sample. The achieved
Zipf's law, power laws and maximum entropy
International Nuclear Information System (INIS)
Visser, Matt
2013-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
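The "much simpler way" described above can be made concrete: maximizing the Shannon entropy on {1, ..., kmax} subject only to a fixed mean of log k yields an exact power law, with the exponent playing the role of the Lagrange multiplier. A minimal sketch, where the support size and target value are arbitrary choices:

```python
import math

def maxent_powerlaw(target_mean_log, kmax=1000, tol=1e-10):
    # Maximizing Shannon entropy on {1, ..., kmax} subject to a fixed
    # mean of log k gives the exponential-family solution
    # p_k ∝ exp(-alpha * log k) = k**(-alpha): a pure power law.
    # The Lagrange multiplier alpha is found by bisection.
    def mean_log(alpha):
        w = [k ** -alpha for k in range(1, kmax + 1)]
        z = sum(w)
        return sum(wk * math.log(k) for k, wk in enumerate(w, start=1)) / z

    lo, hi = 0.0, 10.0        # mean_log is decreasing in alpha
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_log(mid) > target_mean_log:
            lo = mid
        else:
            hi = mid
    alpha = 0.5 * (lo + hi)
    z = sum(k ** -alpha for k in range(1, kmax + 1))
    return alpha, [k ** -alpha / z for k in range(1, kmax + 1)]

alpha, p = maxent_powerlaw(target_mean_log=1.0)
```

By construction the returned distribution satisfies the constraint exactly and has the Zipf-like form p_k ∝ k^(-alpha).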
Maximum likelihood estimation for integrated diffusion processes
DEFF Research Database (Denmark)
Baltazar-Larios, Fernando; Sørensen, Michael
We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated...... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...... by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...
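The paper's setting (integrated, noise-contaminated observations handled with an EM algorithm and simulated diffusion bridges) is involved; as a simpler illustration of likelihood-based estimation for a diffusion, the sketch below fits a directly observed Ornstein-Uhlenbeck process, whose exact discretization is Gaussian AR(1). All parameter values are illustrative, and this is not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ou(theta, mu, sigma, dt, n):
    # Exact discretization of the Ornstein-Uhlenbeck process:
    # X_{t+dt} = mu + (X_t - mu) * exp(-theta*dt) + Gaussian noise.
    a = np.exp(-theta * dt)
    sd = sigma * np.sqrt((1 - a * a) / (2 * theta))
    x = np.empty(n)
    x[0] = mu
    for i in range(1, n):
        x[i] = mu + (x[i - 1] - mu) * a + sd * rng.standard_normal()
    return x

def ou_mle(x, dt):
    # The OU transition density is Gaussian AR(1), so the (conditional)
    # maximum likelihood estimate reduces to a linear regression of
    # x[1:] on x[:-1], then a back-transformation of the coefficients.
    X, Y = x[:-1], x[1:]
    a = np.cov(X, Y, bias=True)[0, 1] / np.var(X)
    mu = (Y.mean() - a * X.mean()) / (1 - a)
    theta = -np.log(a) / dt
    resid = Y - mu - a * (X - mu)
    sigma2 = resid.var() * 2 * theta / (1 - a * a)
    return theta, mu, np.sqrt(sigma2)

x = simulate_ou(theta=1.0, mu=0.5, sigma=0.3, dt=0.1, n=20000)
theta_hat, mu_hat, sigma_hat = ou_mle(x, dt=0.1)
```

With direct observations the likelihood is tractable in closed form; the paper's contribution is precisely that this breaks down for integrated, noisy data, motivating the simulated EM approach.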
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1) a surface temperature and pressure compatible with the existence of liquid water, and 2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot both be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced for planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.
Maximum parsimony on subsets of taxa.
Fischer, Mareike; Thatte, Bhalchandra D
2009-09-21
In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
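The bottom-up pass of Fitch's method referenced above can be sketched on a toy example; the four-taxon tree and character states below are invented for illustration.

```python
def fitch(tree, leaf_states):
    """Bottom-up pass of Fitch's maximum-parsimony algorithm.

    tree: nested 2-tuples for internal nodes, strings for leaf names.
    Returns (candidate state set at the root, minimum number of changes).
    """
    if isinstance(tree, str):                  # leaf: singleton state set
        return {leaf_states[tree]}, 0
    l_set, l_cost = fitch(tree[0], leaf_states)
    r_set, r_cost = fitch(tree[1], leaf_states)
    inter = l_set & r_set
    if inter:                                  # agreement: intersection
        return inter, l_cost + r_cost
    return l_set | r_set, l_cost + r_cost + 1  # conflict: union, +1 change

# Four taxa with a two-state character; three of four leaves carry 'A'.
states = {"t1": "A", "t2": "A", "t3": "B", "t4": "A"}
root_set, cost = fitch((("t1", "t2"), ("t3", "t4")), states)
```

Here the root set is {'A'} and one change suffices, matching the intuition that the minority state 'B' arose once on the branch to t3.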
Distributed optimization of multi-class SVMs.
Directory of Open Access Journals (Sweden)
Maximilian Alber
Full Text Available Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straightforward way. Given enough computational resources, one-vs.-rest SVMs can thus be trained on data involving a large number of classes. The same cannot be said, however, for the so-called all-in-one SVMs, which require solving a quadratic program whose size grows quadratically with the number of classes. We develop distributed algorithms for two all-in-one SVM formulations (Lee et al. and Weston and Watkins) that parallelize the computation evenly over the number of classes. This allows us to compare these models with one-vs.-rest SVMs at unprecedented scale. The results indicate superior accuracy on text classification data.
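The straightforward per-class parallelization can be sketched as follows. This is an assumption-laden illustration: a plain logistic-regression solver stands in for a binary SVM, a thread pool stands in for a real distributed backend, and the data are synthetic.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def train_binary(X, y, epochs=300, lr=0.5):
    # Logistic regression by gradient descent; a stand-in here for a
    # binary SVM solver (the models in the abstract are SVMs).
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        z = np.clip(X @ w, -30.0, 30.0)     # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def one_vs_rest(X, labels, n_classes):
    # Each "class c vs. the rest" subproblem is independent, so the
    # per-class solvers can run in parallel (here: a thread pool).
    with ThreadPoolExecutor() as ex:
        ws = ex.map(lambda c: train_binary(X, (labels == c).astype(float)),
                    range(n_classes))
    return np.stack(list(ws))               # shape: (n_classes, n_features)

# Toy 3-class problem: three well-separated Gaussian blobs.
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((50, 2)) for c in centers])
X = np.hstack([X, np.ones((150, 1))])       # bias column
y = np.repeat(np.arange(3), 50)
W = one_vs_rest(X, y, 3)
accuracy = float((np.argmax(X @ W.T, axis=1) == y).mean())
```

The point of the sketch is the structure: the work splits evenly into one independent binary problem per class, which is exactly what the all-in-one formulations lack out of the box.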
Maximum entropy analysis of liquid diffraction data
International Nuclear Information System (INIS)
Root, J.H.; Egelstaff, P.A.; Nickel, B.G.
1986-01-01
A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)
A Maximum Resonant Set of Polyomino Graphs
Directory of Open Access Journals (Sweden)
Zhang Heping
2016-05-01
Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Automatic maximum entropy spectral reconstruction in NMR
International Nuclear Information System (INIS)
Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.
2007-01-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system
Maximum neutron flux at thermal nuclear reactors
International Nuclear Information System (INIS)
Strugar, P.
1968-10-01
Since research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. A number of papers, and practical examples of reactors with a central reflector, deal with spatial distributions of fuel elements that would result in a higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements; the weakness of such approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Determining the maximum neutron flux thus becomes a variational problem beyond the reach of classical variational calculus. This variational problem has been successfully solved by applying Pontrjagin's maximum principle, and the optimum distribution of fuel concentration was obtained in explicit analytical form. The spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are thereby calculated in a relatively simple way. Beyond the novelty of the results, this approach is interesting because of the optimization procedure itself. [sr]
Social Class Dialogues and the Fostering of Class Consciousness
Madden, Meredith
2015-01-01
How do critical pedagogies promote undergraduate students' awareness of social class, social class identity, and social class inequalities in education? How do undergraduate students experience class consciousness-raising in the intergroup dialogue classroom? This qualitative study explores undergraduate students' class consciousness-raising in an…
Directory of Open Access Journals (Sweden)
Sergievskiy Maxim
2018-01-01
Full Text Available Most object-oriented development technologies rely on the use of the Unified Modeling Language (UML); class diagrams play a very important role in the design process and are used to build a software system model. Modern CASE tools, which are the basic tools for object-oriented development, cannot by themselves be used to optimize UML diagrams. In this manuscript we explain how, based on the use of design patterns and anti-patterns, class diagrams can be verified and optimized. Certain transformations can be carried out automatically; in other cases, potential inefficiencies are indicated and recommendations given. This study also discusses additional CASE tools for validating and optimizing UML class diagrams. For this purpose, a plugin has been developed that analyzes an XMI file containing a description of class diagrams.
Generalized Fourier transforms classes
DEFF Research Database (Denmark)
Berntsen, Svend; Møller, Steen
2002-01-01
The Fourier class of integral transforms with kernels $B(\\omega r)$ has by definition inverse transforms with kernel $B(-\\omega r)$. The space of such transforms is explicitly constructed. A slightly more general class of generalized Fourier transforms is introduced. From the general theory foll...... follows that integral transforms with kernels which are products of a Bessel and a Hankel function, or which are of a certain general hypergeometric type, have inverse transforms of the same structure.
Fitness Club
2015-01-01
Four classes of one hour each are held on Tuesdays. RDV barracks parking at Entrance A, 10 minutes before class time. Spring Course 2015: 05.05/12.05/19.05/26.05 Prices 40 CHF per session + 10 CHF club membership 5 CHF/hour pole rental Check out our schedule and enroll at: https://espace.cern.ch/club-fitness/Lists/Nordic%20Walking/NewForm.aspx? Hope to see you among us! fitness.club@cern.ch
Roza, Marguerite; Ouijdani, Monica
2012-01-01
Two seemingly different threads are in play on the issue of class size. The first is manifested in media reports that tell readers that class sizes are rising to concerning levels. The second thread appears in the work of some researchers and education leaders and suggests that repurposing class-size reduction funds to pay for other reforms may…
Rao, M R P; Bajaj, A
2014-12-01
Telmisartan, an orally active nonpeptide angiotensin II receptor antagonist, is a BCS Class II drug with an aqueous solubility of 9.9 µg/ml and hence an oral bioavailability of 40%. The present study involved preparation of nanosuspensions by an evaporative antisolvent precipitation technique to improve the saturation solubility and dissolution rate of telmisartan. Various stabilizers such as TPGS, PVP K30, and PEG 6000 were investigated, of which TPGS was found to provide the maximum decrease in particle size and to accord greater stability to the nanosuspensions. A Box-Behnken design was used to investigate the effect of independent variables such as stabilizer concentration, time, and speed of stirring on the particle size of the nanosuspensions. Pharmacodynamic studies using the Goldblatt technique were undertaken to evaluate the effect of nano-sizing on the hypotensive effect of the drug. The concentration of TPGS and the speed of rotation were found to play an important role in the particle size of the nanosuspensions, whereas the time of stirring displayed an exponential relationship with particle size. Freeze-dried nanocrystals obtained from the nanosuspension of least particle size were found to have increased saturation solubility of telmisartan in different dissolution media. The reconstituted nanosuspension was found to reduce both systolic and diastolic blood pressure without affecting pulse pressure or heart rate. Statistical tools can be used to identify key process and formulation parameters that play a significant role in controlling particle size in nanosuspensions. © Georg Thieme Verlag KG Stuttgart · New York.
Cell size, genome size and the dominance of Angiosperms
Simonin, K. A.; Roddy, A. B.
2016-12-01
Angiosperms are capable of maintaining the highest rates of photosynthetic gas exchange of all land plants. High rates of photosynthesis depend mechanistically both on efficiently transporting water to the sites of evaporation in the leaf and on regulating the loss of that water to the atmosphere as CO2 diffuses into the leaf. Angiosperm leaves are unique in their ability to sustain high fluxes of liquid- and vapor-phase water transport due to high vein densities and numerous, small stomata. Despite the ubiquity of studies characterizing the anatomical and physiological adaptations that enable angiosperms to maintain high rates of photosynthesis, the underlying explanation of why they have been able to develop such high leaf vein densities, and such small and abundant stomata, is still incomplete. Here we ask whether the scaling of genome size and cell size places a fundamental constraint on the photosynthetic metabolism of land plants, and whether genome downsizing among the angiosperms directly contributed to their greater potential and realized primary productivity relative to the other major groups of terrestrial plants. Using previously published data, we show that a single relationship can predict guard cell size from genome size across the major groups of terrestrial land plants (e.g. angiosperms, conifers, cycads and ferns). Similarly, a strong positive correlation exists between genome size and both stomatal density and vein density, which together ultimately constrain maximum potential (gs, max) and operational stomatal conductance (gs, op). Further, the difference in the slopes describing the covariation between genome size and both gs, max and gs, op suggests that genome downsizing brings gs, op closer to gs, max. Taken together, the data presented here suggest that the smaller genomes of angiosperms allow their final cell sizes to vary more widely and respond more directly to environmental conditions, and in doing so bring operational photosynthetic
Finite mixture model: A maximum likelihood estimation approach on time series data
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties: the estimator is consistent as the sample size increases to infinity, and thus asymptotically unbiased. Moreover, as the sample size increases, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods. Maximum likelihood estimation is therefore adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
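As a sketch of the estimation machinery the abstract refers to, the following fits a two-component Gaussian mixture by maximum likelihood using the EM algorithm. The data are synthetic stand-ins (the paper's rubber-price and exchange-rate series are not reproduced here), and all parameter values are illustrative.

```python
import numpy as np

def em_two_component(x, n_iter=200):
    """Maximum likelihood fit of a two-component Gaussian mixture via EM."""
    xs = np.sort(x)
    # Crude initialisation: split the sorted sample in half.
    mu = np.array([xs[: len(x) // 2].mean(), xs[len(x) // 2:].mean()])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2.0 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of weights, means and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# Synthetic two-regime data.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-1.0, 0.3, 500), rng.normal(2.0, 0.5, 500)])
pi, mu, sigma = em_two_component(x)
```

With well-separated components the E- and M-steps converge in a few dozen iterations; for flatter likelihoods, multiple random restarts are the usual safeguard.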
Body Size Distribution of the Dinosaurs
O'Gorman, Eoin J.; Hone, David W. E.
2012-01-01
The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutiona...
Maximum entropy decomposition of quadrupole mass spectra
International Nuclear Information System (INIS)
Toussaint, U. von; Dose, V.; Golan, A.
2004-01-01
We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimate of the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those obtained from a Bayesian approach. We show that the GME method is efficient and computationally fast.
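The flavor of such maximum entropy inversion can be sketched in a stripped-down form. Here the cracking patterns are assumed known and the data noiseless (unlike the full GME treatment, which also assigns entropies to cracking and noise probabilities), and the matrix values are hypothetical. The maximum entropy concentration vector subject to linear constraints has the exponential-family form c ∝ exp(Aᵀλ), and the multipliers λ can be found by gradient ascent on the dual:

```python
import numpy as np

# Hypothetical cracking patterns: columns give the fragmentation of two
# molecules over four mass channels.
A = np.array([[0.7, 0.1],
              [0.2, 0.3],
              [0.1, 0.4],
              [0.0, 0.2]])
true_c = np.array([0.6, 0.4])   # "unknown" concentrations
y = A @ true_c                  # measured (here noiseless) spectrum

# Dual ascent for the max-entropy c with c >= 0, sum(c) = 1 and A c = y.
lam = np.zeros(A.shape[0])
for _ in range(5000):
    c = np.exp(A.T @ lam)
    c /= c.sum()                # c(lam) = softmax(A^T lam)
    lam += 0.5 * (y - A @ c)    # gradient of the dual objective
c_hat = c
```

Because the exponential form keeps every component strictly positive, non-negativity and normalisation come for free, which is one reason entropy-based inversions are robust on this kind of problem.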
Maximum power operation of interacting molecular motors
DEFF Research Database (Denmark)
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum entropy method in momentum density reconstruction
International Nuclear Information System (INIS)
Dobrzynski, L.; Holas, A.
1997-01-01
The Maximum Entropy Method (MEM) is applied to the reconstruction of 3-dimensional electron momentum density distributions observed through sets of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of the electron momentum density may be reliably carried out with the aid of a simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author)
On the maximum drawdown during speculative bubbles
Rotundo, Giulia; Navarra, Mauro
2007-08-01
A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outliers with respect to the bulk of the drawdown price movement distribution. This paper goes deeper into the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of drawdown and maximum drawdown movements of index prices. The analysis of drawdown duration is also performed, and it is the core of the risk measure estimated here.
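The drawdown statistics the abstract examines are simple to compute. A minimal sketch on a toy price path (the numbers are illustrative, not index data):

```python
import numpy as np

def drawdowns(prices):
    """Relative drop of each price from the running maximum so far."""
    prices = np.asarray(prices, dtype=float)
    running_max = np.maximum.accumulate(prices)
    return 1.0 - prices / running_max

# Toy price path: the largest peak-to-trough fall is from 130 down to 80.
p = [100, 120, 90, 95, 130, 80, 85]
mdd = drawdowns(p).max()        # maximum drawdown, about 38.5%
```

Drawdown duration, the paper's other ingredient, would additionally track how many steps elapse between a running maximum and its recovery.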
Multi-Channel Maximum Likelihood Pitch Estimation
DEFF Research Database (Denmark)
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
Conductivity maximum in a charged colloidal suspension
Energy Technology Data Exchange (ETDEWEB)
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Dynamical maximum entropy approach to flocking.
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Multiperiod Maximum Loss is time unit invariant.
Kovacevic, Raimund M; Breuer, Thomas
2016-01-01
Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Improved Maximum Parsimony Models for Phylogenetic Networks.
Van Iersel, Leo; Jones, Mark; Scornavacca, Celine
2018-05-01
Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits modeling biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.
Ancestral sequence reconstruction with Maximum Parsimony
Herbst, Lina; Fischer, Mareike
2017-01-01
One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...
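On trees, the classic Maximum Parsimony machinery for ancestral state inference is Fitch's algorithm. A minimal sketch of its bottom-up pass for one character; the four-taxon tree, taxa and states below are invented for illustration:

```python
def fitch(tree, leaf_states):
    """Fitch's small-parsimony pass: candidate ancestral state sets at the
    root and the minimum number of state changes, for one character."""
    changes = 0
    def walk(node):
        nonlocal changes
        if isinstance(node, str):              # leaf: singleton state set
            return {leaf_states[node]}
        left, right = (walk(child) for child in node)
        inter = left & right
        if inter:                              # children agree: intersect
            return inter
        changes += 1                           # children disagree: one change
        return left | right
    root_set = walk(tree)
    return root_set, changes

# Hypothetical 4-taxon tree ((A,B),(C,D)) with one DNA site.
tree = (("A", "B"), ("C", "D"))
states = {"A": "T", "B": "T", "C": "G", "D": "T"}
root_set, n_changes = fitch(tree, states)
# Most parsimonious root state is {"T"}, with a single change leading to C.
```

The bottom-up pass returns the candidate root states and the parsimony score; a top-down pass would then fix concrete states at every internal node.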
International Nuclear Information System (INIS)
Donner, E.B.; Low, J.M.; Lux, C.R.
1992-01-01
DOE Order 6430.1A, General Design Criteria (GDC), requires that DOE facilities be evaluated with respect to ''safety class items.'' Although the GDC defines safety class items, it does not provide a methodology for selecting them. The methodology described in this paper was developed to assure that safety class items at the Savannah River Site (SRS) are selected in a consistent and technically defensible manner. Safety class items are those in the highest of four categories determined to be of special importance to nuclear safety, and they merit appropriately higher-quality design, fabrication, and industrial test standards and codes. The identification of safety class items is approached using a cascading strategy that begins at the ''safety function'' level (i.e., a cooling function, ventilation function, etc.) and proceeds down to the system, component, or structure level. Thus, the items that are required to support a safety function are safety class items (SCIs). The basic steps in this procedure apply to the determination of SCIs both for new project activities and for operating facilities. The GDC lists six characteristics of SCIs to be considered as a starting point for safety item classification. They are as follows: 1. Those items whose failure would produce exposure consequences that would exceed the guidelines in Section 1300-1.4, ''Guidance on Limiting Exposure of the Public,'' at the site boundary or nearest point of public access. 2. Those items required to maintain operating parameters within the safety limits specified in the Operational Safety Requirements during normal operations and anticipated operational occurrences. 3. Those items required for nuclear criticality safety. 4. Those items required to monitor the release of radioactive material to the environment during and after a Design Basis Accident. 5. Those items required to achieve and maintain the facility in a safe shutdown condition. 6. Those items that control the safety class items listed above.
Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong
2013-01-01
In this paper, a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...
Objective Bayesianism and the Maximum Entropy Principle
Directory of Open Access Journals (Sweden)
Jon Williamson
2013-09-01
Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
Shrinking Middle Class and Changing Income Distribution of Korea: 1995-2005
Joon-Woo Nahm
2008-01-01
This paper investigates the shrinking middle class hypothesis and reveals more details about recent trends in the income distribution of Korea from 1995 to 2005. We find that the consensus view of a declining middle class is correct and that the decline in the middle class split equally into the lower class and the upper class in Korea. Furthermore, while the size and income share of the middle class declined, the share of the upper class increased rapidly and the share of the lower class remained s...
CLASS: Core Library for Advanced Scenario Simulations
International Nuclear Information System (INIS)
Mouginot, B.; Thiolliere, N.
2015-01-01
The nuclear reactor simulation community has to perform complex electronuclear scenario simulations. To avoid constraints coming from the existing powerful scenario software such as COSI, VISION or FAMILY, the open source Core Library for Advanced Scenario Simulation (CLASS) has been developed. The main asset of CLASS is its ability to include any type of reactor, whether the system is innovative or standard. A reactor is fully described by its evolution database, which should contain a set of different validated fuel compositions in order to simulate transitional scenarios. CLASS aims to be a useful tool to study scenarios involving Generation-IV reactors as well as innovative fuel cycles, like the thorium cycle. In addition to all standard key objects required by an electronuclear scenario simulation (the isotopic vector, the reactor, the fuel storage and the fabrication units), CLASS also integrates two new specific modules: fresh fuel evolution and recycled fuel fabrication. The first module, dealing with fresh fuel evolution, is implemented in CLASS by solving the Bateman equations built from a database of induced cross-sections. The second module, which incorporates the fabrication of recycled fuel into CLASS, can be defined by user priorities and/or algorithms. By default, it uses a linear Pu-equivalent method, which allows predicting, from the isotopic composition, the maximum burn-up accessible for a given type of fuel. This paper presents the basis of the CLASS scenario, the fuel method applied to a MOX fuel and an evolution module benchmark based on the French electronuclear fleet from 1977 to 2012. Results of the CLASS calculation were compared with the inventory made and published by the ANDRA organisation in 2012. For UOX used fuels, ANDRA reported 12006 tonnes of heavy metal in stock, including cooling, versus 18500 tonnes of heavy metal predicted by CLASS. The large difference is easily explained by the presence of 56 tonnes of plutonium already separated.
A maximum power point tracking algorithm for buoy-rope-drum wave energy converters
Wang, J. Q.; Zhang, X. C.; Zhou, Y.; Cui, Z. C.; Zhu, L. S.
2016-08-01
The maximum power point tracking control is the key link to improve the energy conversion efficiency of wave energy converters (WECs). This paper presents a novel variable-step-size Perturb and Observe maximum power point tracking algorithm with a power classification standard for control of a buoy-rope-drum WEC. The algorithm and a simulation model of the buoy-rope-drum WEC are presented in detail, along with simulation results. The results show that the algorithm tracks the maximum power point of the WEC quickly and accurately.
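The core perturb-and-observe idea can be sketched generically: perturb the operating point, observe the power change, reverse direction on a drop, and scale the step with the local slope estimate. This is a toy sketch on an invented unimodal power curve, not the paper's controller or its power classification standard:

```python
def perturb_and_observe(power_at, x0=0.1, step0=0.05, step_min=1e-4, n_steps=200):
    """Variable-step Perturb & Observe hill climbing toward the maximum
    power point of a unimodal power curve (generic sketch)."""
    x, step = x0, step0
    p_prev = power_at(x)
    direction = 1.0
    for _ in range(n_steps):
        x += direction * step
        p = power_at(x)
        dp = p - p_prev
        if dp < 0:
            direction = -direction            # power fell: reverse direction
        slope = abs(dp) / step                # local |dP/dx| estimate
        step = min(step0, max(step_min, 0.1 * slope))
        p_prev = p
    return x

# Toy power curve with a single maximum at x = 0.6 (hypothetical units).
mpp = perturb_and_observe(lambda x: 1.0 - (x - 0.6) ** 2)
```

Scaling the step with the slope gives large perturbations far from the optimum and small ones near it, which is the usual motivation for variable-step P&O over the fixed-step variant.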
Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains
Cofré, Rodrigo; Maldonado, Cesar
2018-01-01
We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
Directory of Open Access Journals (Sweden)
Nagy Imola Katalin
2015-12-01
Full Text Available The problem of translation in foreign language classes cannot be dealt with unless we attempt an overview of what translation has meant for language teaching in different periods of language pedagogy. From the translation-oriented grammar-translation method, through the complete ban on translation and the mother tongue during the era of the audio-lingual approaches, we have today come to reconsider the role and status of translation in ESL classes. This article advocates for translation as a useful ESL class activity that can fully meet the requirements of communicative teaching. We also identify some activities and games relying on translation in books published in the 1990s and the 2000s.
Linear Time Local Approximation Algorithm for Maximum Stable Marriage
Directory of Open Access Journals (Sweden)
Zoltán Király
2013-08-01
Full Text Available We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum-size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. This algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear-time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm every person makes decisions using only their own list and some information asked from members of these lists (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.
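For orientation, the classic Gale-Shapley deferred acceptance algorithm the abstract alludes to (complete strict lists, no ties; not the paper's 3/2-approximation) can be sketched as:

```python
def gale_shapley(men_prefs, women_prefs):
    """Classic Gale-Shapley deferred acceptance for complete strict
    preference lists. Returns a stable matching as {man: woman}."""
    free = list(men_prefs)
    next_choice = {m: 0 for m in men_prefs}   # next woman to propose to
    engaged_to = {}                           # woman -> current partner
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m                 # w was free: accept
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free.append(engaged_to[w])        # w trades up; old partner freed
            engaged_to[w] = m
        else:
            free.append(m)                    # w rejects m
    return {m: w for w, m in engaged_to.items()}

# Tiny hypothetical instance.
match = gale_shapley(
    {"a": ["x", "y"], "b": ["y", "x"]},
    {"x": ["b", "a"], "y": ["a", "b"]},
)
```

With ties and incomplete lists, different stable matchings can have different sizes, which is precisely what makes the maximum-size variant treated in the paper hard.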
Maximum Recoverable Gas from Hydrate Bearing Sediments by Depressurization
Terzariol, Marco
2017-11-13
The estimation of gas production rates from hydrate-bearing sediments requires complex numerical simulations. This manuscript presents a set of simple and robust analytical solutions to estimate the maximum depressurization-driven recoverable gas. These limiting-equilibrium solutions are established when the dissociation front reaches steady-state conditions and ceases to expand further. The analytical solutions show the relevance of (1) relative permeabilities between the hydrate-free sediment, the hydrate-bearing sediment, and the aquitard layers, and (2) the extent of depressurization in terms of the fluid pressures at the well, at the phase boundary, and in the far field. Closed-form solutions for the size of the produced zone allow for expeditious financial analyses; results highlight the need for innovative production strategies in order to make hydrate accumulations an economically viable energy resource. Horizontal directional drilling and multi-wellpoint seafloor dewatering installations may lead to advantageous production strategies in shallow seafloor reservoirs.
Hydraulic Limits on Maximum Plant Transpiration
Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.
2011-12-01
Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Unlike previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
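The optimum the abstract describes can be made concrete with a small numerical sketch. A Weibull-type vulnerability curve (all parameter values hypothetical) gives the water supply as the integral of conductivity over water potential; nearly all of that supply is exhausted once conductivity has fallen to a few percent of its maximum, which bounds the sustainable transpiration rate:

```python
import numpy as np

# Xylem vulnerability: conductivity k declines as water potential psi (MPa)
# becomes more negative. Weibull-type curve with hypothetical parameters.
k_max, psi50, shape = 5.0, -2.0, 3.0    # psi50 = potential at 50% loss

def k(psi):
    return k_max * 0.5 ** ((psi / psi50) ** shape)

psi_soil = -0.5
# "Critical" leaf water potential: conductivity down to 5% of k_max;
# beyond this, extra tension buys almost no extra flow.
psi_crit = psi50 * (np.log(0.05) / np.log(0.5)) ** (1.0 / shape)

# Maximum sustainable transpiration: integral of k(psi) from psi_crit to
# psi_soil, by the trapezoidal rule.
grid = np.linspace(psi_crit, psi_soil, 2000)
kg = k(grid)
E_max = float(np.sum(0.5 * (kg[1:] + kg[:-1]) * np.diff(grid)))
```

Pushing the leaf water potential below psi_crit adds almost nothing to the integral while greatly increasing embolism risk, which is the supply-side optimum the abstract quantifies.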
Debye classes in A15 compounds
International Nuclear Information System (INIS)
Staudenmann, J.; DeFacio, B.; Testardi, L.R.; Werner, S.A.; Fluekiger, R.; Muller, J.
1981-01-01
The comparison between the electron charge-density distributions of V3Si, Cr3Si, and V3Ge at room temperature leads us to study the Debye temperatures at 0 K, Θ0, from specific-heat measurements for over 100 A15 compounds. A phenomenological Θ0(M), with M the molecular mass, is obtained from the static scaling relation Θ0(M) = aM^b, and this organizes all of the data into five Debye classes: V (V3Si), V-G, G (V3Ge), G-C, and C (Cr3Si). In contrast, the Debye temperature Θ0(V), with V the unit-cell volume, does not relate alloys as Θ0(M) does, with the exception of the C class. This latter case leads to the surprising result M ∝ V^(≈1/3) and to a Grüneisen constant of 1.6 ± 0.1 for all compounds of this class. In the V class, where V3Si and Nb3Sn are found, Θ0(V) labels these two alloys differently, as do their martensitic c/a ratios. With T̄c denoting the average superconducting transition temperature within a Debye class, interesting correlations are shown. One is the maximum of T̄c, which occurs in the V class, where the strongest anharmonicity is found. Another is the case of compounds formed only by transition elements up to and including Au. This interesting case shows that approximately 3.2 < T̄c < 5.0 K in all five classes and that there is no correlation between Tc and the thermal properties. The implications of these observations for creating better models for the A15 compounds are briefly discussed.
Analogue of Pontryagin's maximum principle for multiple integrals minimization problems
Zelikin, Mikhail
2016-01-01
A theorem analogous to Pontryagin's maximum principle is proved for multiple integral minimization problems. Unlike the usual maximum principle, the maximum is taken not over all matrices, but only over matrices of rank one. Examples are given.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
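As an illustration of how such measures can be derived from a basin polygon, the maximum length can be taken as the largest distance between any two boundary vertices. The coordinates below are invented, and real fetch calculations are direction-dependent and use the full shoreline, so this is only a sketch of the geometry:

```python
from itertools import combinations
from math import dist

def maximum_length(vertices):
    """Largest distance between any two polygon vertices (brute force,
    O(n^2); fine for small outlines)."""
    return max(dist(a, b) for a, b in combinations(vertices, 2))

# Hypothetical lake outline in planar coordinates (e.g. metres).
outline = [(0, 0), (400, 50), (500, 300), (150, 350), (-50, 200)]
max_len = maximum_length(outline)
```

For large shorelines, the same quantity is usually computed on the convex hull with rotating calipers instead of the quadratic scan.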
Dr. K. Sravana Kumar
2016-01-01
The middle class is placed between labour and capital. It neither directly owns the means of production that pump out the surplus generated by wage labour power, nor does it, by its own labour, produce the surplus which has use and exchange value. Broadly speaking, this class consists of the petty bourgeoisie and the white-collar workers. The former are either self-employed or involved in the distribution of commodities, and the latter are non-manual office workers, supervisors and profession...
DEFF Research Database (Denmark)
Elling, Rasmus Christian; Rezakhani, Khodadad
2016-01-01
Persian, like any other language, is laced with references to class, both blatant and subtle. With idioms and metaphors, Iranians can identify and situate others, and thus themselves, within hierarchies of social status and privilege, both real and imagined. Some class-related terms can be traced back to medieval times, whereas others are of modern vintage, the linguistic legacy of television shows, pop songs, social media memes or street vernacular. Every day, it seems, an infectious set of phrases appears that makes yesterday's seem embarrassingly antiquated.
Measuring fire size in tunnels
International Nuclear Information System (INIS)
Guo, Xiaoping; Zhang, Qihui
2013-01-01
A new measure of fire size, Q′, has been introduced for longitudinally ventilated tunnels as the ratio of flame height to the height of the tunnel. The analysis in this article has shown that Q′ controls both the critical velocity and the maximum ceiling temperature in the tunnel. Before the fire flame reaches the tunnel ceiling (Q′ < 1.0), the critical Froude number Fr depends on fire size; once the flame has hit the ceiling (Q′ > 1.0), Fr approaches a constant value. This is a well-known phenomenon in large tunnel fires. Tunnel ceiling temperature shows the opposite trend. Before the fire flame reaches the ceiling, it increases very slowly with the fire size. Once the flame has hit the ceiling of the tunnel, temperature rises rapidly with Q′. The good agreement between the current prediction and three different sets of experimental data has demonstrated that the theory correctly models the relation among the heat release rate of the fire, the ventilation flow and the height of the tunnel. From a design point of view, the theoretical maximum of critical velocity for a given tunnel can help to prevent oversized ventilation systems. -- Highlights: • Fire sizing is an important safety measure in tunnel design. • The new measure of fire size is a function of the HRR of the fire, tunnel height and ventilation. • The measure can identify large and small fires. • The characteristics of different fires are consistent with observations in real fires
Alharbi, Abeer A.; Stoet, Gijsbert
2017-01-01
There is no consensus among academics about whether children benefit from smaller classes. We analysed the data from the 2012 Programme for International Student Assessment (PISA) to test if smaller classes lead to higher performance. Advantages of using this data set are not only its size (478,120 15-year old students in 63 nations) and…
Design of Simplified Maximum-Likelihood Receivers for Multiuser CPM Systems
Directory of Open Access Journals (Sweden)
Li Bing
2014-01-01
Full Text Available A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.
Design of simplified maximum-likelihood receivers for multiuser CPM systems.
Bing, Li; Bai, Baoming
2014-01-01
A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.
Maximum power analysis of photovoltaic module in Ramadi city
Energy Technology Data Exchange (ETDEWEB)
Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)
2013-07-01
Performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power obtainable in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time by using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The solar irradiance data are measured on the earth's surface in the campus area of Anbar University. Actual average readings were taken from the data logger of the sun tracker system, which was set to save an average reading every two minutes, based on one reading per second. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that PV system sizing can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.
Radiation pressure acceleration: The factors limiting maximum attainable ion energy
Energy Technology Data Exchange (ETDEWEB)
Bulanov, S. S.; Esarey, E.; Schroeder, C. B. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Bulanov, S. V. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); A. M. Prokhorov Institute of General Physics RAS, Moscow 119991 (Russian Federation); Esirkepov, T. Zh.; Kando, M. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); Pegoraro, F. [Physics Department, University of Pisa and Istituto Nazionale di Ottica, CNR, Pisa 56127 (Italy); Leemans, W. P. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Physics Department, University of California, Berkeley, California 94720 (United States)
2016-05-15
Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. The tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in the experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it transparent for radiation and effectively terminating the acceleration. The off-normal incidence of the laser on the target, due either to the experimental setup, or to the deformation of the target, will also lead to establishing a limit on maximum ion energy.
METHOD FOR DETERMINING THE MAXIMUM ARRANGEMENT FACTOR OF FOOTWEAR PARTS
Directory of Open Access Journals (Sweden)
DRIŞCU Mariana
2014-05-01
By classic methodology, designing footwear is a very complex and laborious activity, because it requires many graphic constructions produced by manual means, which consume a great deal of the producer's time. Moreover, the results of the classic methodology may contain many inaccuracies, with most unpleasant consequences for the footwear producer. Thus, a customer who buys a footwear product on the basis of the characteristics written on it (size, width) may notice after a period of use that the product has flaws caused by inadequate design. To avoid such situations, the strictest scientific criteria must be followed when designing a footwear product; the decisive step in this direction was made possible some time ago by powerful technical development and the wide adoption of electronic computing systems. This paper presents a software product for determining all possible arrangements of a footwear product's reference points, in order to automatically obtain the maximum arrangement factor. The user multiplies the pattern in order to find the most economical arrangement of the reference points, testing several arrangement variants in the translation and the rotation-translation systems. The same process is used in establishing the arrangement factor for the two reference points of the designed footwear product. After testing several variants of arrangement in the translation and rotation-translation systems, the maximum arrangement factor is chosen. This allows the user to estimate the material waste.
Constraints on pulsar masses from the maximum observed glitch
Pizzochero, P. M.; Antonelli, M.; Haskell, B.; Seveso, S.
2017-07-01
Neutron stars are unique cosmic laboratories in which fundamental physics can be probed in extreme conditions not accessible to terrestrial experiments. In particular, the precise timing of rotating magnetized neutron stars (pulsars) reveals sudden jumps in rotational frequency in these otherwise steadily spinning-down objects. These 'glitches' are thought to be due to the presence of a superfluid component in the star, and offer a unique glimpse into the interior physics of neutron stars. In this paper we propose an innovative method to constrain the mass of glitching pulsars, using observations of the maximum glitch observed in a star, together with state-of-the-art microphysical models of the pinning interaction between superfluid vortices and ions in the crust. We study the properties of a physically consistent angular momentum reservoir of pinned vorticity, and we find a general inverse relation between the size of the maximum glitch and the pulsar mass. We are then able to estimate the mass of all the observed glitchers that have displayed at least two large events. Our procedure will allow current and future observations of glitching pulsars to constrain not only the physics of glitch models but also the superfluid properties of dense hadronic matter in neutron star interiors.
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
Maximum Profit Configurations of Commercial Engines
Directory of Open Access Journals (Sweden)
Yiran Chen
2011-06-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases the eventual state (market equilibrium) is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different transfer laws affect the model with respect to the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.
The worst case complexity of maximum parsimony.
Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal
2014-11-01
One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.
Modelling maximum likelihood estimation of availability
International Nuclear Information System (INIS)
Waller, R.A.; Tietjen, G.L.; Rock, G.W.
1975-01-01
Suppose the performance of a nuclear-powered electrical generating plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability, that is, the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and the time-to-repair, respectively. Once those statistical models are specified, the availability A(t) can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + [theta/(lambda+theta)] exp{-[(1/lambda)+(1/theta)]t} for t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
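Under the exponential models above, the maximum likelihood estimators of lambda and theta are simply the sample means of the observed failure and repair times, and estimators of A(t) and A(infinity) follow by substitution. A minimal sketch (the function names are illustrative):

```python
import math

def availability_mle(failure_times, repair_times):
    """MLEs of the exponential mean time-to-failure (lambda) and mean
    time-to-repair (theta), plugged into the availability formulas
    A(t) and A(infinity) given in the abstract."""
    lam = sum(failure_times) / len(failure_times)    # MLE of lambda
    theta = sum(repair_times) / len(repair_times)    # MLE of theta

    def a_t(t):
        # instantaneous availability A(t), t > 0
        return (lam / (lam + theta)
                + (theta / (lam + theta))
                * math.exp(-((1.0 / lam) + (1.0 / theta)) * t))

    a_inf = lam / (lam + theta)                      # steady-state A(infinity)
    return a_t, a_inf
```

With mean time-to-failure 2 and mean time-to-repair 1, the steady-state availability is 2/3, and A(0) = 1, as expected for a plant that starts in the operating state.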
International Nuclear Information System (INIS)
Delorme, J.
1978-01-01
The definition and general properties of weak second-class currents are recalled and various detection possibilities are briefly reviewed. It is shown that the existing data on nuclear beta decay can be consistently analysed in terms of a phenomenological model. Their implications for the fundamental structure of weak interactions are discussed.
Mitchell, Rosalita
1998-01-01
School communities are challenged to find ways to identify good teachers and give other teachers a chance to learn from them. The New Mexico World Class Teacher Project is encouraging teachers to pursue certification by the National Board for Professional Teaching Standards. This process sharpens teachers' student assessment skills and encourages…
Scheduled webinars can help you better manage EPA web content. Class topics include Drupal basics, creating different types of pages in the WebCMS such as document pages and forms, using Google Analytics, and best practices for metadata and accessibility.
DEFF Research Database (Denmark)
Werlauff, Erik
2009-01-01
The article deals with the relatively new Danish Act on Class Action (Danish: gruppesøgsmål), which was proposed by the Permanent Council on Civil Procedure (Retsplejerådet), of which the article's author is a member. The operability of the new provisions is illustrated through some well-known Danish...
McKinnon, Rachel
2012-01-01
This article shares how the author explained her trans status to her students. Everyone has been extremely supportive of her decision to come out in class and to completely mask the male secondary-sex characteristics, especially in the workplace. The department chair and the faculty in general have been willing to do whatever they can to assist…
Directory of Open Access Journals (Sweden)
Pateşan Marioara
2017-12-01
The scores obtained by military students are very important, as many opportunities depend on them: the choice of branch, selection for different in- and off-campus activities, the appointment to a workplace, and so on. A qualifier, regardless of its form of expression, can make a difference in a given context of issuing a value judgment in relation to the assessment of a student's performance. In our research we tried to find out what motivates students, what determines them to get actively involved in the tasks they are given, and the ways we can improve their participation in classes and assignments. In order to educate a generation well, we need teachers who are not only well prepared but also open-minded, flexible, and in step with the methodological novelties that can improve the teaching-learning process in class. Over the years we have noticed that in classes where students constituted a cohesive group with an increasing degree of interaction between members, the results were better than in a group that did not appreciate team-work. In this article we want to highlight that a teacher who brings the appropriate methods and procedures to class can contribute decisively to strengthening group cohesion and achieving high scores.
Random generation of bubble sizes on the heated wall during subcooled boiling
International Nuclear Information System (INIS)
Koncar, B.; Mavko, B.
2003-01-01
In subcooled flow boiling, the locally averaged bubble diameter varies significantly in the direction transverse to the flow. From the experimental data of Bartel, a curved cross-sectional profile of the local bubble diameter, with its maximum shifted away from the heated wall, may be observed. In the present paper, the increasing part of the profile (near the heated wall) is explained by the random generation of bubble sizes on the heated wall. The hypothesis was supported by a statistical analysis of different CFD simulations, which varied the size of the generated bubbles (normally distributed) and the number of bubbles generated per unit surface. Local averaging of the calculated void fraction distributions over different bubble classes was performed. The increasing part of the locally averaged bubble diameter profile in the near-wall region was successfully predicted. (author)
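The hypothesis of randomly generated wall bubble sizes can be illustrated with a toy sampling experiment; the mean, spread, and class width below are assumed values for illustration, not those of the cited simulations:

```python
import random

random.seed(1)

# Sample wall-generated bubble diameters (mm) from a normal distribution,
# truncated at a small positive value to keep diameters physical.
d_mean, d_sd = 0.8, 0.2            # assumed mean and spread, in mm
diameters = [max(0.05, random.gauss(d_mean, d_sd)) for _ in range(10000)]

# Group the bubbles into 0.1 mm size classes (a histogram of counts),
# analogous to the bubble classes averaged over in the paper.
classes = {}
for d in diameters:
    k = int(d / 0.1)               # class index: 0.1 mm bins
    classes[k] = classes.get(k, 0) + 1

# Number-averaged diameter over all generated bubbles.
d_avg = sum(diameters) / len(diameters)
```

Averaging such samples over position-dependent subsets is what produces the smooth locally averaged diameter profile discussed in the abstract.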
Directory of Open Access Journals (Sweden)
Geoff Eley
2013-12-01
In the early 1980s, the class-centred politics of the socialist tradition was in crisis, and leading commentators adopted apocalyptic tones. By the end of the decade, the left remained deeply divided between advocates of change and defenders of the faith. By the mid-1990s the former had, broadly speaking, won the battle. This article seeks to present this contemporary shift not as the 'death of class' but as the passing of a particular kind of class society, shaped by the process of working-class formation between the 1880s and the 1940s and by the political alignment that resulted from it, which reached its apogee in the social-democratic construction of the post-war settlement. When long-term changes in the economy combined with the attack on Keynesianism in the recessionary politics of the mid-1970s onward, working-class unity ceased to be available in its old, well-worn form as the natural terrain of left politics. As one dominant working-class collectivity declined, another slowly and unevenly took shape to replace it. But the operational unity of this new working-class aggregation is still largely in formation. To recover the political efficacy of the socialist tradition, some new vision of collective political agency will be needed, one imaginatively attuned to the emerging conditions of capitalist production and accumulation in the early twenty-first century.
A maximum power point tracking for photovoltaic-SPE system using a maximum current controller
Energy Technology Data Exchange (ETDEWEB)
Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)
2003-02-01
Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system, based on maximum current searching methods, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side will simultaneously track the maximum power point of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter via pulse-width modulation (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
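The idea of tracking the maximum converter output current can be illustrated with a perturb-and-observe style search on a toy current-versus-duty-factor curve; the quadratic model, the peak location, and the step sizes are assumptions for illustration, not the paper's PI controller:

```python
def spe_current(duty):
    """Toy unimodal model of the SPE-side DC-DC converter output
    current (A) as a function of duty factor; peak assumed at 0.42."""
    return 8.0 - 50.0 * (duty - 0.42) ** 2

def track_max_current(duty=0.10, delta=0.005, steps=200):
    """Perturb the duty factor and keep going while the measured
    current rises; reverse direction whenever the current drops."""
    i_prev = spe_current(duty)
    direction = 1
    for _ in range(steps):
        duty += direction * delta
        i_now = spe_current(duty)
        if i_now < i_prev:
            direction = -direction
        i_prev = i_now
    return duty
```

The search settles into a small oscillation around the current maximum, which, by the argument in the abstract, corresponds to the maximum power point of the PV panel.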
Particle size distribution instrument. Topical report 13
Energy Technology Data Exchange (ETDEWEB)
Okhuysen, W.; Gassaway, J.D.
1995-04-01
The development of an instrument to measure the concentration of particles in a gas is described in this report. An in situ instrument was designed and constructed which sizes individual particles and counts the number of occurrences for several size classes. Although this instrument was designed to measure the size distribution of slag and seed particles generated at an experimental coal-fired magnetohydrodynamic power facility, it can be used as a nonintrusive diagnostic tool for other hostile industrial processes involving the formation and growth of particulates. Two of the techniques developed are extensions of the widely used crossed-beam velocimeter, providing simultaneous measurement of the size distribution and velocity of particles.
Reliability of one-repetition maximum performance in people with chronic heart failure.
Ellis, Rachel; Holland, Anne E; Dodd, Karen; Shields, Nora
2018-02-24
To evaluate the intra-rater and inter-rater reliability of the one-repetition maximum strength test in people with chronic heart failure. Intra-rater and inter-rater reliability study. A public tertiary hospital in northern metropolitan Melbourne. Twenty-four participants (nine female, mean age 71.8 ± 13.1 years) with mild to moderate heart failure of any aetiology. Lower limb strength was assessed by determining the maximum weight that could be lifted using a leg press. Intra-rater reliability was tested by one assessor on two separate occasions. Inter-rater reliability was tested by two assessors in random order. Intra-class correlation coefficients and 95% confidence intervals were calculated. Bland and Altman analyses were also conducted, including calculation of the mean differences between measures and the limits of agreement. Ten intra-rater and 21 inter-rater assessments were completed. Excellent intra-rater (intra-class correlation coefficient(2,1) 0.96) and inter-rater (intra-class correlation coefficient(2,1) 0.93) reliability was found. Intra-rater assessment showed less variability (mean difference 4.5 kg, limits of agreement -8.11 to 17.11 kg) than inter-rater agreement (mean difference -3.81 kg, limits of agreement -23.39 to 15.77 kg). A one-repetition maximum determined using a leg press is a reliable measure in people with heart failure. Given its smaller limits of agreement, intra-rater testing is recommended. Implications for rehabilitation: using a leg press to determine a one-repetition maximum, we were able to demonstrate excellent inter-rater and intra-rater reliability using an intra-class correlation coefficient; however, the Bland and Altman limits of agreement were wide for inter-rater reliability, so we recommend using one assessor when measuring change in strength within an individual over time.
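The Bland and Altman quantities reported above (mean difference and limits of agreement, conventionally the mean ± 1.96 standard deviations of the paired differences) can be computed as follows; this is a generic sketch, not the study's analysis code:

```python
import math

def bland_altman(measure_a, measure_b):
    """Mean difference and 95% limits of agreement between two paired
    sets of measurements (e.g. two raters, or one rater on two occasions)."""
    diffs = [a - b for a, b in zip(measure_a, measure_b)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    # sample standard deviation (n - 1 denominator) of the differences
    sd = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    lower = mean_diff - 1.96 * sd
    upper = mean_diff + 1.96 * sd
    return mean_diff, lower, upper
```

Narrow limits of agreement, as in the intra-rater case above, indicate that repeated measurements by the same assessor rarely disagree by a clinically important amount.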
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
Maximum likelihood sequence estimation for optical complex direct modulation.
Che, Di; Yuan, Feng; Shieh, William
2017-04-17
Semiconductor lasers are versatile optical transmitters in nature. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.
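MLSE of this kind is typically implemented with the Viterbi algorithm over a trellis of channel states. The sketch below decodes BPSK symbols through a generic one-symbol-memory channel under Gaussian noise; it is a toy stand-in for the paper's chirp-based transition model, and the channel taps are assumed values:

```python
def mlse_viterbi(rx, h0=1.0, h1=0.5):
    """Maximum likelihood sequence estimation for BPSK symbols
    s_k in {-1, +1} through a channel y_k = h0*s_k + h1*s_{k-1} + noise.
    With Gaussian noise, ML reduces to minimising accumulated squared error."""
    states = (-1, +1)                        # trellis state = previous symbol
    metric = {s: 0.0 for s in states}        # unknown start: both states zero
    paths = {s: [] for s in states}
    for y in rx:
        new_metric, new_paths = {}, {}
        for s in states:                     # s = candidate current symbol
            best_m, best_p = None, None
            for p in states:                 # p = candidate previous symbol
                m = metric[p] + (y - h0 * s - h1 * p) ** 2
                if best_m is None or m < best_m:
                    best_m, best_p = m, p
            new_metric[s] = best_m
            new_paths[s] = paths[best_p] + [s]   # survivor path per state
        metric, paths = new_metric, new_paths
    final = min(states, key=lambda s: metric[s])
    return paths[final]
```

The per-branch squared-error term plays the role of the MLSE transition probability discussed in the abstract; the paper's contribution is a statistically estimated version of that term for the frequency-chirp channel.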
Maximum mass of magnetic white dwarfs
International Nuclear Information System (INIS)
Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez
2015-01-01
We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10^13 G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound of B ∼ 10^13 G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)
TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS
Energy Technology Data Exchange (ETDEWEB)
Buckley, R; Amy DuPont, A; Robert Kurzeja, R; Matt Parker, M
2007-11-12
Mixing depth is an important quantity in the determination of air pollution concentrations. Fire weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth based on potential temperature and the change of turbulence with height at a given location. This paper examines trends in the average estimated daily maximum mixing depth at the SRS over an extended period of time (4.75 years), derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows differences between the model versions to be seen, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days on which special balloon soundings were released are also discussed.
Maximum power flux of auroral kilometric radiation
International Nuclear Information System (INIS)
Benson, R.F.; Fainberg, J.
1991-01-01
The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 x 10^-13 W m^-2 Hz^-1 at 360 kHz, normalized to a radial distance r of 25 R_E assuming the power falls off as r^-2. A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3
Maximum likelihood window for time delay estimation
International Nuclear Information System (INIS)
Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup
2004-01-01
Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, research on the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. This method has been validated in experiments and can provide much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating achieved with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
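The time-arrival-difference idea underlying this work can be sketched with a plain cross-correlation delay estimator (the ML window itself is the paper's contribution and is not reproduced here); the signal model and delay below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_delay = 4096, 25              # samples; synthetic leak-noise delay

# Synthetic broadband "leak noise" seen by two sensors: sensor 2
# receives the same signal delayed by true_delay samples, plus noise.
s = rng.standard_normal(n)
x1 = s + 0.1 * rng.standard_normal(n)
x2 = np.concatenate([np.zeros(true_delay), s[:-true_delay]])
x2 += 0.1 * rng.standard_normal(n)

# The peak of the cross-correlation gives the time delay estimate.
corr = np.correlate(x2, x1, mode="full")
lag = int(np.argmax(corr)) - (n - 1)
```

With the elastic wave speed known, the estimated lag converts directly into a distance offset along the pipeline; the paper's frequency-domain window sharpens this correlation peak.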
Ancestral Sequence Reconstruction with Maximum Parsimony.
Herbst, Lina; Fischer, Mareike
2017-12-01
One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
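For a fully bifurcating tree with known leaf states, the MP candidate states at internal nodes can be computed with Fitch's algorithm (for symmetric costs). This is a standard illustration of MP ancestral state inference, not the manuscript's proof machinery, and the tree and states in the example are hypothetical:

```python
def fitch(node, leaf_states):
    """Bottom-up pass of Fitch's algorithm on a rooted binary tree.

    node: either a leaf name (str) or a pair (left_subtree, right_subtree).
    Returns (candidate_state_set_at_node, parsimony_score_of_subtree)."""
    if isinstance(node, str):
        return {leaf_states[node]}, 0
    left_set, left_cost = fitch(node[0], leaf_states)
    right_set, right_cost = fitch(node[1], leaf_states)
    common = left_set & right_set
    if common:                                 # intersection: no substitution
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1  # union: +1 change
```

For the tree ((s1, s2), (s3, s4)) with states a, a, a, g at one site, the root set is {a} with parsimony score 1, so MP unambiguously estimates a as the state of the last common ancestor, illustrating the kind of unambiguous reconstruction the conjecture concerns.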
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
§ 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
§ 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...
Half-width at half-maximum, full-width at half-maximum analysis
Indian Academy of Sciences (India)
... in addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of ...
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
DEFF Research Database (Denmark)
• First major publication on the phenomenon • Offers cross-linguistic, descriptive, and diverse theoretical approaches • Includes analysis of data from different language families and from lesser-studied languages. This book is the first major cross-linguistic study of 'flexible words', i.e. words that cannot be classified in terms of the traditional lexical categories Verb, Noun, Adjective or Adverb. Flexible words can, without special morphosyntactic marking, serve in functions for which other languages must employ members of two or more of the four traditional, 'specialised' word classes. Thus, flexible words are underspecified for communicative functions like 'predicating' (verbal function), 'referring' (nominal function) or 'modifying' (a function typically associated with adjectives and e.g. manner adverbs). Even though linguists have been aware of flexible word classes for more than...
Directory of Open Access Journals (Sweden)
Emine Bala
2015-12-01
Storytelling is one of the oldest forms of education and oral tradition, continuously used to transfer previous nations' cultures, traditions and customs. It constructs a bridge between the new and the old. Storytelling in EFL classes usually provides a meaningful context and an interesting atmosphere, and is used as a tool to highly motivate students. Although it seems to be mostly based on speaking, it is used to promote other skills such as writing, reading, and listening. Storytelling is mainly regarded as grounded in imitation and repetition; nevertheless, many creative activities can be implemented in the classroom, since this method directs learners to use their imaginations. This study discusses the importance of storytelling as a teaching method and outlines the advantages of storytelling in EFL classes.
Queen Elizabeth class battleships
Brown, Les
2010-01-01
The 'ShipCraft' series provides in-depth information about building and modifying model kits of famous warship types. Lavishly illustrated, each book takes the modeller through a brief history of the subject class, highlighting differences between sister-ships and changes in their appearance over their careers. This includes paint schemes and camouflage, featuring colour profiles and highly detailed line drawings and scale plans. The modelling section reviews the strengths and weaknesses of available kits, lists commercial accessory sets for super-detailing of the ships, and provides hints on modifying and improving the basic kit. This is followed by an extensive photographic survey of selected high-quality models in a variety of scales, and the book concludes with a section on research references - books, monographs, large-scale plans and relevant websites.This volume covers the five ships of the highly successful Queen Elizabeth class, a design of fast battleship that set the benchmark for the last generati...
World Class Facilities Management
DEFF Research Database (Denmark)
Malmstrøm, Ole Emil; Jensen, Per Anker
2013-01-01
Everyone who works with Facilities Management with enthusiasm dreams of delivering World Class. DFM dreams of creating the framework and foundation for Denmark to count itself among the world leaders. This article takes stock of how close we are to reaching that dream goal.
Impact of marine reserve on maximum sustainable yield in a traditional prey-predator system
Paul, Prosenjit; Kar, T. K.; Ghorai, Abhijit
2018-01-01
Multispecies fisheries management requires managers to consider the impact of fishing activities on several species, as fishing affects both targeted and non-targeted species, directly or indirectly, in several ways. The intended goal of traditional fisheries management is to achieve maximum sustainable yield (MSY) from the targeted species, which on many occasions affects the targeted species as well as the entire ecosystem. Marine reserves are often acclaimed as a marine ecosystem management tool. Few attempts have been made to generalize the ecological effects of a marine reserve on MSY policy. We examine here how MSY and population levels in a prey-predator system are affected by low, medium and high reserve sizes under different possible scenarios. Our simulation work shows that for a low reserve area, the value of MSY for prey exploitation is maximum when both prey and predator species have fast movement rates. For a medium reserve size, our analysis revealed that the maximum value of MSY for prey exploitation is obtained when the prey population has a fast movement rate and the predator population has a slow movement rate. For a high reserve area, the maximum value of MSY for prey exploitation is very low compared to that in the low and medium reserve cases. On the other hand, for low and medium reserve areas, MSY for predator exploitation is maximum when both species have fast movement rates.
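For orientation, the classical single-species baseline behind the MSY concept can be sketched numerically. Assuming simple logistic growth with proportional harvesting (an illustrative textbook model, not the paper's spatial prey-predator system), the sustainable yield peaks at half the carrying capacity:

```python
# Baseline single-species reminder: for logistic growth
#   dN/dt = r N (1 - N/K) - h N,
# the positive equilibrium is N* = K (1 - h/r), so the sustainable yield
# Y(h) = h N* is maximised at h = r/2, giving MSY = r K / 4.
# A direct sweep over fishing mortality h confirms the calculus.
r, K = 0.8, 1000.0

def equilibrium_yield(h: float) -> float:
    """Sustainable yield at fishing mortality h (0 <= h < r)."""
    n_star = K * (1.0 - h / r)      # equilibrium stock under harvest
    return h * n_star

best_h = max((i * r / 1000 for i in range(1000)), key=equilibrium_yield)
msy = equilibrium_yield(best_h)
print(f"optimal h = {best_h:.3f}, MSY = {msy:.1f} (theory: {r * K / 4:.1f})")
```

The reserve-size and movement-rate effects studied in the paper modify where this peak sits, but the same yield-versus-effort logic underlies the MSY comparisons.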
1983-04-25
The series of ships, named after the provinces of Ecuador, includes: CA 11 ESMERALDAS, laid down 27 September 1979, launched 11 October 1980... LOJA, laid down 25 March 1981, launched 27 February 1982; fitting out at CNR Ancona. The building program, on schedule so far, calls for the entire class... built and still building in 16 units for foreign navies (Libya, Ecuador, Iraq) with four possible armament alternatives. In particular, they
Benach, Joan; Amable, Marcelo
2004-05-01
Social classes and poverty are two key social determinants fundamental to understanding how disease and health inequalities are produced. During the 1990s, Spain experienced notable oscillations in inequality and poverty levels, with an increase in the middle of the decade when new forms of social exclusion, high levels of unemployment and great difficulties in accessing the labour market emerged, especially for workers with fewer resources. Today society is still characterized by a clear social stratification and the existence of social classes, with a predominance of high levels of unemployment and precarious jobs, and where poverty is an endemic social problem much worse than the EU average. Diminishing health inequalities and improving quality of life will depend very much on reducing poverty levels and improving equal opportunities and the quality of employment. To increase understanding of how social class and poverty affect public health, the quality of both information and research needs to improve; furthermore, planners and political decision-makers must take these determinants into account when undertaking disease prevention and health promotion.
Observational differences between Swift GRB classes
International Nuclear Information System (INIS)
Balazs, L. G.; Horvath, I.; Bagoly, Zs.; Szecsi, D.; Veres, P.
2011-01-01
There is accumulating evidence that GRBs have an intermediate group besides the short and long classes. Based on the observational data available in the Swift table, we compared the observational γ-ray and X-ray properties of GRBs using discriminant analysis from multivariate mathematical statistics. The analysis resulted in two canonical discriminating functions giving the maximum separation between the groups. The first discriminating function is dominated by the γ-ray and X-ray fluence, while the second one is almost identical with the photon index.
Software engineering processes for Class D missions
Killough, Ronnie; Rose, Debi
2013-09-01
Software engineering processes are often seen as anathemas; thoughts of CMMI key process areas and NPR 7150.2A compliance matrices can motivate a software developer to consider other career fields. However, with adequate definition, common-sense application, and an appropriate level of built-in flexibility, software engineering processes provide a critical framework in which to conduct a successful software development project. One problem is that current models seem to be built around an underlying assumption of "bigness," and assume that all elements of the process are applicable to all software projects regardless of size and tolerance for risk. This is best illustrated in NASA's NPR 7150.2A in which, aside from some special provisions for manned missions, the software processes are to be applied based solely on the criticality of the software to the mission, completely agnostic of the mission class itself. That is, the processes applicable to a Class A mission (high priority, very low risk tolerance, very high national significance) are precisely the same as those applicable to a Class D mission (low priority, high risk tolerance, low national significance). This paper will propose changes to NPR 7150.2A, taking mission class into consideration, and discuss how some of these changes are being piloted for a current Class D mission—the Cyclone Global Navigation Satellite System (CYGNSS).
Evaluation of maximum power point tracking in hydrokinetic energy conversion systems
Directory of Open Access Journals (Sweden)
Jahangir Khan
2015-11-01
Maximum power point tracking is a mature control issue for wind, solar and other systems. On the other hand, being a relatively new technology, detailed discussions of power tracking for hydrokinetic energy conversion systems are generally not available. Prior to developing sophisticated control schemes for use in hydrokinetic systems, existing know-how in wind or solar technologies can be explored. In this study, a comparative evaluation of three generic classes of maximum power point tracking scheme is carried out: (a) tip speed ratio control, (b) power signal feedback control, and (c) hill climbing search control. In addition, a novel concept for maximum power point tracking, namely extremum seeking control, is introduced. Detailed and validated system models are used in a simulation environment. Potential advantages and drawbacks of each of these schemes are summarised.
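Of the three classes, hill climbing search (perturb and observe) is the simplest to sketch. The toy power curve and step size below are illustrative assumptions, not a validated hydrokinetic model:

```python
# Minimal perturb-and-observe (hill-climbing) MPPT sketch. The controller
# perturbs the rotor speed and reverses direction whenever power drops,
# so it climbs to and then oscillates around the maximum power point.
def power(omega: float) -> float:
    """Toy rotor power curve with a single maximum of P=4 at omega=2 rad/s."""
    return max(0.0, 4.0 * omega - omega ** 2)

def po_mppt(omega0: float, step: float = 0.05, iters: int = 200) -> float:
    omega, p_prev, direction = omega0, power(omega0), 1.0
    for _ in range(iters):
        omega += direction * step
        p = power(omega)
        if p < p_prev:        # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return omega

omega_opt = po_mppt(0.5)
print(f"converged speed = {omega_opt:.2f} rad/s")  # settles near 2.0
```

The residual oscillation around the optimum (here within one step of 2.0 rad/s) is the classic drawback of this class of scheme that the tip speed ratio and extremum seeking methods aim to avoid.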
International Nuclear Information System (INIS)
Setjo, Renaningsih
2000-01-01
A piping stress analysis for the PZR/Auxiliary Spray Lines of Nuclear Power Plant AV Unit 1 (PWR type) has been carried out. The purpose of this analysis is to establish the maximum allowable load permitted when placing lead shielding on the class 1 piping system, the Pressurizer/Auxiliary Spray Lines (PZR/Aux.) of Reactor Coolant Loops 1 and 4, for NPP AV Unit 1 in modes 5 and 6 during an outage. The analysis is intended to reduce the maximum radiation dose to operators during the ISI (In-Service Inspection) period. The results show that the maximum allowable load for the 4-inch PZR/Auxiliary Spray Lines is 123 lbs/ft.
Variations in tooth size and arch dimensions in Malay schoolchildren.
Hussein, Khalid W; Rajion, Zainul A; Hassan, Rozita; Noor, Siti Noor Fazliah Mohd
2009-11-01
To compare the mesio-distal tooth sizes and dental arch dimensions in Malay boys and girls with Class I, Class II and Class III malocclusions. The dental casts of 150 subjects (78 boys, 72 girls), between 12 and 16 years of age, with Class I, Class II and Class III malocclusions were used. Each group consisted of 50 subjects. An electronic digital caliper was used to measure the mesio-distal tooth sizes of the upper and lower permanent teeth (first molar to first molar), the intercanine and intermolar widths. The arch lengths and arch perimeters were measured with AutoCAD software (Autodesk Inc., San Rafael, CA, U.S.A.). The mesio-distal dimensions of the upper lateral incisors and canines in the Class I malocclusion group were significantly smaller than the corresponding teeth in the Class III and Class II groups, respectively. The lower canines and first molars were significantly smaller in the Class I group than the corresponding teeth in the Class II group. The lower intercanine width was significantly smaller in the Class II group as compared with the Class I group, and the upper intermolar width was significantly larger in Class III group as compared with the Class II group. There were no significant differences in the arch perimeters or arch lengths. The boys had significantly wider teeth than the girls, except for the left lower second premolar. The boys also had larger upper and lower intermolar widths and lower intercanine width than the girls. Small, but statistically significant, differences in tooth sizes are not necessarily accompanied by significant arch width, arch length or arch perimeter differences. Generally, boys have wider teeth, larger lower intercanine width and upper and lower intermolar widths than girls.
Enabling Event Tracing at Leadership-Class Scale through I/O Forwarding Middleware
Energy Technology Data Exchange (ETDEWEB)
Ilsche, Thomas [Technische Universitat Dresden; Schuchart, Joseph [Technische Universitat Dresden; Cope, Joseph [Argonne National Laboratory (ANL); Kimpe, Dries [Argonne National Laboratory (ANL); Jones, Terry R [ORNL; Knuepfer, Andreas [Technische Universitat Dresden; Iskra, Kamil [Argonne National Laboratory (ANL); Ross, Robert [Argonne National Laboratory (ANL); Nagel, Wolfgang E. [Technische Universitat Dresden; Poole, Stephen W [ORNL
2012-01-01
Event tracing is an important tool for understanding the performance of parallel applications. As concurrency increases in leadership-class computing systems, the quantity of performance log data can overload the parallel file system, perturbing the application being observed. In this work we present a solution for event tracing at leadership scales. We enhance the I/O forwarding system software to aggregate and reorganize log data prior to writing to the storage system, significantly reducing the burden on the underlying file system for this type of traffic. Furthermore, we augment the I/O forwarding system with a write buffering capability to limit the impact of artificial perturbations from log data accesses on traced applications. To validate the approach, we modify the Vampir tracing tool to take advantage of this new capability and show that the approach increases the maximum traced application size by a factor of 5x to more than 200,000 processors.
Measuring wage effects of plant size
DEFF Research Database (Denmark)
Albæk, Karsten; Arai, Mahmood; Asplund, Rita
1998-01-01
There are large plant size–wage effects in the Nordic countries after taking into account individual and job characteristics as well as systematic sorting of workers into various plant sizes. The plant size–wage elasticities we obtain are, in contrast to other dimensions of the wage distribution... size–wage elasticity. Our results indicate that using size-class midpoints yields essentially the same results as using exact measures of plant size.
Loneliness and Ethnic Composition of the School Class
DEFF Research Database (Denmark)
Madsen, Katrine Rich; Damsgaard, Mogens Trab; Rubin, Mark
2016-01-01
The present research aimed to address this gap by exploring the association between loneliness and three dimensions of the ethnic composition of the school class: (1) membership of the ethnic majority in the school class, (2) the size of one's own ethnic group in the school class, and (3) the ethnic diversity of the school class. We used data from the Danish 2014 Health Behaviour in School-aged Children survey: a nationally representative sample of 4383 (51.2% girls) 11-15-year-olds. Multilevel logistic regression analyses revealed that adolescents who did not belong to the ethnic majority in the school class had increased odds of loneliness compared to adolescents who belonged to the ethnic majority. Furthermore, having more same-ethnic classmates lowered the odds of loneliness. We did not find any statistically significant association between the ethnic diversity of school classes and loneliness in adolescence.
Maximum entropy production rate in quantum thermodynamics
Energy Technology Data Exchange (ETDEWEB)
Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)
2010-06-01
In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible
A mini-exhibition with maximum content
Laëtitia Pedroso
2011-01-01
The University of Budapest has been hosting a CERN mini-exhibition since 8 May. While smaller than the main travelling exhibition it has a number of major advantages: its compact design alleviates transport difficulties and makes it easier to find suitable venues in the Member States. Its content can be updated almost instantaneously and it will become even more interactive and high-tech as time goes by. The exhibition on display in Budapest. The purpose of CERN's new mini-exhibition is to be more interactive and easier to install. Due to its size, the main travelling exhibition cannot be moved around quickly, which is why it stays in the same country for 4 to 6 months. But this means a long waiting list for the other Member States. To solve this problem, the Education Group has designed a new exhibition, which is smaller and thus easier to install. Smaller maybe, but no less rich in content, as the new exhibition conveys exactly the same messages as its larger counterpart. However, in the slimm...
Simultaneous maximum a posteriori longitudinal PET image reconstruction
Ellis, Sam; Reader, Andrew J.
2017-09-01
Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
Size effects on cavitation instabilities
DEFF Research Database (Denmark)
Niordson, Christian Frithiof; Tvergaard, Viggo
2006-01-01
Void growth is here analyzed for such cases. A finite strain generalization of a higher order strain gradient plasticity theory is applied for a power-law hardening material, and the numerical analyses are carried out for an axisymmetric unit cell containing a spherical void. In the range of high stress triaxiality, where cavitation instabilities are predicted by conventional plasticity theory, such instabilities are also found for the nonlocal theory, but the effects of gradient hardening delay the onset of the instability. Furthermore, in some cases the cavitation stress reaches a maximum and then decays as the void grows to a size well above the characteristic material length.
Class impressions : Higher social class elicits lower prosociality
Van Doesum, Niels J.; Tybur, Joshua M.; Van Lange, Paul A.M.
2017-01-01
Social class predicts numerous important life outcomes and social orientations. To date, literature has mainly examined how an individual's own class shapes interactions with others. But how prosocially do people treat others they perceive as coming from lower, middle, or higher social classes?
Class Action and Class Settlement in a European Perspective
DEFF Research Database (Denmark)
Werlauff, Erik
2013-01-01
The article analyses the options for introducing common European rules on class action lawsuits with an opt-out model in individual cases. An analysis is made of how the risks of misuse of class actions can be prevented. The article considers the Dutch rules on class settlements (the WCAM procedure)...
Weighted Maximum-Clique Transversal Sets of Graphs
Chuan-Min Lee
2011-01-01
A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...
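For intuition, the definitions can be checked by brute force on a small graph (exponential enumeration, for illustration only, not the efficient algorithms for special graph classes studied in the paper): in two triangles sharing a vertex, that shared vertex alone is a minimum maximum-clique transversal set.

```python
from itertools import combinations

# Small example graph: two triangles {0,1,2} and {2,3,4} sharing vertex 2.
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)}
n = 5

def adjacent(u: int, v: int) -> bool:
    return (min(u, v), max(u, v)) in edges

def is_clique(vs) -> bool:
    return all(adjacent(u, v) for u, v in combinations(vs, 2))

# Enumerate all cliques, then keep only the maximum ones.
cliques = [set(c) for r in range(1, n + 1)
           for c in combinations(range(n), r) if is_clique(c)]
max_size = max(len(c) for c in cliques)
max_cliques = [c for c in cliques if len(c) == max_size]

def hits_all(t) -> bool:
    """True if vertex set t intersects every maximum clique."""
    return all(set(t) & c for c in max_cliques)

# Smallest vertex set intersecting all maximum cliques.
transversal = next(set(t) for r in range(n + 1)
                   for t in combinations(range(n), r) if hits_all(t))
print("maximum cliques:", max_cliques, "transversal:", transversal)
```

Here both maximum cliques have size 3 and share vertex 2, so {2} is a transversal of minimum cardinality 1.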
An "expanded" class perspective
DEFF Research Database (Denmark)
Steur, Luisa Johanna
2014-01-01
Following the police raid on the 'Muthanga' land occupation by Adivasi ('indigenous') activists in Kerala, India, in February 2003, intense public debate erupted about the fate of Adivasis in this 'model' development state. Most commentators saw the land occupation either as the fight... Using class analysis, as elaborated in Marxian anthropology, this article provides an alternative to the liberal-culturalist explanation of indigenism in Kerala, arguing instead that contemporary class processes, as experienced close to the skin by the people who decided to participate in the Muthanga struggle, were what shaped their decision to embrace indigenism.
Class B0631+519: Last of the Class Lenses
Energy Technology Data Exchange (ETDEWEB)
York, Tom; Jackson, N.; Browne, I.W.A.; Koopmans, L.V.E.; McKean, J.P.; Norbury, M.A.; Biggs, A.D.; Blandford, R.D.; de Bruyn, A.G.; Fassnacht, C.D.; Myers, S.T.; Pearson, T.J.; Phillips, P.M.; Readhead, A.C.S.; Rusin, D.; Wilkinson, P.N.; /Jodrell Bank /Kapteyn Astron. Inst., Groningen /UC, Davis /JIVE, Dwingeloo /KIPAC, Menlo Park /NFRA,
2005-05-31
We report the discovery of the new gravitational lens system CLASS B0631+519. Imaging with the VLA, MERLIN and the VLBA reveals a doubly-imaged flat-spectrum radio core, a doubly-imaged steep-spectrum radio lobe and possible quadruply-imaged emission from a second lobe. The maximum separation between the lensed images is 1.16 arcsec. High resolution mapping with the VLBA at 5 GHz resolves the most magnified image of the radio core into a number of sub-components spread across approximately 20 mas. No emission from the lensing galaxy or an odd image is detected down to 0.31 mJy (5σ) at 8.4 GHz. Optical and near-infrared imaging with the ACS and NICMOS cameras on the HST show that there are two galaxies along the line of sight to the lensed source, as previously discovered by optical spectroscopy. We find that the foreground galaxy at z=0.0896 is a small irregular, and that the other, at z=0.6196 is a massive elliptical which appears to contribute the majority of the lensing effect. The host galaxy of the lensed source is detected in the HST near-infrared imaging as a set of arcs, which form a nearly complete Einstein ring. Mass modeling using non-parametric techniques can reproduce the near-infrared observations and indicates that the small irregular galaxy has a (localized) effect on the flux density distribution in the Einstein ring at the 5-10% level.
Maiorana-McFarland class: Degree optimization and algebraic properties
DEFF Research Database (Denmark)
Pasalic, Enes
2006-01-01
In this paper, we consider a subclass of the Maiorana-McFarland class used in the design of resilient nonlinear Boolean functions. We show that these functions allow a simple modification so that resilient Boolean functions of maximum algebraic degree may be generated instead of the suboptimized degree of functions in the extended Maiorana-McFarland (MM) class (nonlinear resilient functions F : GF(2)^n -> GF(2)^m derived from linear codes) in the original class. Preserving a high nonlinearity value immanent to the original construction method, together with the degree optimization, gives in many cases functions with cryptographic properties superior to all previously known construction methods. This approach is then used to increase the algebraic... We also show that in the Boolean case, the same subclass seems not to have an optimized algebraic immunity, hence not providing maximum resistance...
PATTERNS OF THE MAXIMUM RAINFALL AMOUNTS REGISTERED IN 24 HOURS WITHIN THE OLTENIA PLAIN
Directory of Open Access Journals (Sweden)
ALINA VLĂDUŢ
2012-03-01
The present study aims at rendering the main features of the maximum rainfall amounts registered in 24 h within the Oltenia Plain. We used 30-year time series (1980-2009) for seven meteorological stations. Generally, the maximum amounts in 24 h display the same pattern as the monthly mean amounts, namely higher values in the interval May-October. In terms of mean values, the highest amounts are registered in the western and northern extremities of the plain. The maximum values generally exceed 70 mm at all meteorological stations: D.T. Severin, 224 mm, July 1999; Slatina, 104.8 mm, August 2002; Caracal, 92.2 mm, July 1991; Bechet, 80.8 mm, July 2006; Craiova, 77.6 mm, April 2003. During the cold season, a greater uniformity was noticed all over the plain, due to the cyclonic origin of rainfall, compared to the warm season, when thermal convection is quite active and triggers local showers. In order to better emphasize the peculiarities of this parameter, we have calculated the frequency on different value classes (eight classes), as well as the probability of appearance of different amounts. It resulted that the highest frequency (25-35%) is held by the first two classes of values (0-10 mm; 10.1-20 mm). The lowest frequency is registered for amounts of more than 100 mm, which generally display a probability of occurrence of less than 1%, and only in the western and eastern extremities of the plain.
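The frequency-by-class computation described above amounts to a simple binning step. The sketch below uses the study's eight 10-mm-wide classes; the rainfall values are invented for illustration, not the Oltenia station records:

```python
# Bin 24-h maximum rainfall amounts into eight value classes and report the
# percentage frequency of each class, as in the study's frequency analysis.
bins = [(0, 10), (10.1, 20), (20.1, 30), (30.1, 40),
        (40.1, 50), (50.1, 60), (60.1, 70), (70.1, float("inf"))]
# Hypothetical sample of 24-h maxima in mm (illustrative only).
amounts = [4.2, 8.9, 12.5, 15.0, 18.3, 22.1, 27.7, 33.4, 41.0, 55.6, 63.2, 77.6]

def class_frequencies(values, classes):
    """Percentage of values falling in each class interval."""
    counts = [sum(lo <= v <= hi for v in values) for lo, hi in classes]
    return [100.0 * c / len(values) for c in counts]

for (lo, hi), f in zip(bins, class_frequencies(amounts, bins)):
    print(f"{lo}-{hi} mm: {f:.1f}%")
```

With real station data, the same loop reproduces the result reported above that the lowest two classes dominate the distribution.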
The Maximum Cumulative Ratio (MCR) quantifies the degree to which a single chemical drives the cumulative risk of an individual exposed to multiple chemicals. Phthalates are a class of chemicals with ubiquitous exposures in the general population that have the potential to cause ...
Energetic constraints, size gradients, and size limits in benthic marine invertebrates.
Sebens, Kenneth P
2002-08-01
Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provides a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.
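The energetic-optimum argument above can be illustrated with a toy model. Assuming energy intake scales sublinearly with size (a surface-area-like exponent) and metabolic cost roughly linearly (both exponents and coefficients here are illustrative assumptions, not values from the review), net energy has an interior maximum:

```python
# Toy energetic optimal size model: net energy E(S) = a*S^0.67 - b*S,
# where intake scales with a feeding-surface-like exponent and cost with mass.
# Coefficients and exponents are illustrative, not empirical values.
a, b = 10.0, 2.0

def net_energy(s: float) -> float:
    return a * s ** 0.67 - b * s

# Analytic optimum: dE/dS = 0.67*a*S^(-0.33) - b = 0
#   -> S* = (0.67*a/b)^(1/0.33)
s_star = (0.67 * a / b) ** (1.0 / 0.33)

# A grid search over sizes agrees with the calculus.
grid = [i * 0.01 for i in range(1, 20000)]
s_grid = max(grid, key=net_energy)
print(f"analytic S* = {s_star:.2f}, grid S* = {s_grid:.2f}")
```

Comparing this energetic optimum with a mechanically limited maximum size, as the review describes, then indicates whether energetic constraints or dislodgment mortality dominate the observed size gradient.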
Body size distribution of the dinosaurs.
Directory of Open Access Journals (Sweden)
Eoin J O'Gorman
The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups and supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size.
Body size distribution of the dinosaurs.
O'Gorman, Eoin J; Hone, David W E
2012-01-01
The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups and supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size.
Body Size Distribution of the Dinosaurs
O’Gorman, Eoin J.; Hone, David W. E.
2012-01-01
The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups and supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size. PMID:23284818
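The contrast the abstract draws (dinosaurs skewed toward large species, modern vertebrates toward small ones) can be quantified with a sample skewness statistic; this is an illustration only, with invented log-mass samples rather than the paper's data.

```python
def skewness(xs):
    """Sample skewness (Fisher-Pearson g1) of a list of numbers."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n  # third central moment
    return m3 / m2 ** 1.5

# Invented log10 body masses: mass concentrated at large sizes (left tail,
# negative g1) versus concentrated at small sizes (right tail, positive g1).
dinosaur_log_mass = [1.2, 2.5, 3.0, 3.4, 3.6, 3.7, 3.8]
mammal_log_mass = [0.5, 0.8, 1.0, 1.1, 1.3, 2.0, 3.5]
```

A skew "towards larger species" shows up as a negative g1 on the log-mass scale, the opposite sign to the modern-vertebrate sample.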
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Directory of Open Access Journals (Sweden)
Daigle Bernie J
2012-05-01
Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.
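A toy illustration of the likelihood idea behind the abstract (not the MCEM2 algorithm itself): for a fully observed pure-birth process with rate constant k, each waiting time at population n is Exponential(k·n), the log-likelihood is Σ[log(k·nᵢ) − k·nᵢ·tᵢ], and the MLE has the closed form k̂ = (number of births) / Σ nᵢ·tᵢ.

```python
import random

def simulate_pure_birth(k, n0, births, rng):
    """Gillespie simulation of a pure-birth process.

    Returns a list of (population, waiting_time) pairs; at population n the
    next birth occurs after an Exponential(k * n) waiting time.
    """
    events, n = [], n0
    for _ in range(births):
        t = rng.expovariate(k * n)
        events.append((n, t))
        n += 1
    return events

def mle_birth_rate(events):
    """Closed-form MLE: births divided by total 'exposure' sum(n_i * t_i)."""
    return len(events) / sum(n * t for n, t in events)

rng = random.Random(1)
events = simulate_pure_birth(k=0.5, n0=10, births=2000, rng=rng)
k_hat = mle_birth_rate(events)  # should lie close to the true k = 0.5
```

For analytically intractable models no such closed form exists, which is why trajectory-simulation methods like MCEM are needed.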
Teachers, Social Class and Underachievement
Dunne, Mairead; Gazeley, Louise
2008-01-01
Addressing the "the social class attainment gap" in education has become a government priority in England. Despite multiple initiatives, however, little has effectively addressed the underachievement of working-class pupils within the classroom. In order to develop clearer understandings of working-class underachievement at this level,…
Mapping the Social Class Structure
DEFF Research Database (Denmark)
Toubøl, Jonas; Grau Larsen, Anton
2017-01-01
This article develops a new explorative method for deriving social class categories from patterns of occupational mobility. In line with Max Weber, our research is based on the notion that, if class boundaries do not inhibit social mobility then the class categories are of little value. Thus...
Maximum power per VA control of vector controlled interior ...
Indian Academy of Sciences (India)
Thakur Sumeet Singh
2018-04-11
Apr 11, 2018 ... Department of Electrical Engineering, Indian Institute of Technology Delhi, New ... The MPVA operation allows maximum-utilization of the drive-system. ... Permanent magnet motor; unity power factor; maximum VA utilization; ...
Electron density distribution in Si and Ge using multipole, maximum ...
Indian Academy of Sciences (India)
Si and Ge has been studied using multipole, maximum entropy method (MEM) and ... and electron density distribution using the currently available versatile ..... data should be subjected to maximum possible utility for the characterization of.
Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory
2010-12-15
It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle.' Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms.
Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations
International Nuclear Information System (INIS)
Wollaber, Allan B.; Larsen, Edward W.; Densmore, Jeffery D.
2011-01-01
It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle'. Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms. (author)
Characterization of PGL(2, p) by its order and one conjugacy class ...
Indian Academy of Sciences (India)
validity of a conjecture of J. G. Thompson is generalized to the group PGL(2, p) by a new way. Keywords. Finite group; conjugacy class size; Thompson's ..... class size, J. Inequalities Appl. (2012) 310. [13] Conway J H, Curtis R T, Norton S P, Parker R A and Wilson R A, Atlas of Finite Groups. (1985) (Oxford: Clarendon ...
YOHKOH Observations at the Y2K Solar Maximum
Aschwanden, M. J.
1999-05-01
Yohkoh will provide simultaneous co-aligned soft X-ray and hard X-ray observations of solar flares at the coming solar maximum. The Yohkoh Soft X-ray Telescope (SXT) covers the approximate temperature range of 2-20 MK with a pixel size of 2.46 arcsec, and thus ideally complements the EUV imagers sensitive to 1-2 MK plasma, such as SoHO/EIT and TRACE. The Yohkoh Hard X-ray Telescope (HXT) offers hard X-ray imaging at 20-100 keV with a time resolution down to 0.5 s for major events. In this paper we review the major SXT and HXT results from Yohkoh solar flare observations, and anticipate some of the key questions that can be addressed through joint observations with other ground- and space-based observatories. This encompasses the dynamics of flare triggers (e.g. emerging flux, photospheric shear, interaction of flare loops in quadrupolar geometries, large-scale magnetic reconfigurations, eruption of twisted sigmoid structures, coronal mass ejections), the physics of particle dynamics during flares (acceleration processes, particle propagation, trapping, and precipitation), and flare plasma heating processes (chromospheric evaporation, coronal energy loss by nonthermal particles). In particular, we emphasize how Yohkoh data analysis is progressing from a qualitative to a more quantitative science, employing 3-dimensional modeling and numerical simulations.
Paddle River Dam : review of probable maximum flood
Energy Technology Data Exchange (ETDEWEB)
Clark, D. [UMA Engineering Ltd., Edmonton, AB (Canada); Neill, C.R. [Northwest Hydraulic Consultants Ltd., Edmonton, AB (Canada)
2008-07-01
The Paddle River Dam was built in northern Alberta in the mid 1980s for flood control. According to the 1999 Canadian Dam Association (CDA) guidelines, this 35-metre-high, zoned earthfill dam with a spillway capacity sized to accommodate a probable maximum flood (PMF) is rated as a very high hazard. At the time of design, the PMF was estimated to have a peak flow rate of 858 m³/s. A review of the PMF in 2002 increased the peak flow rate to 1,890 m³/s. In light of a 2007 revision of the CDA safety guidelines, the PMF was reviewed and the inflow design flood (IDF) was re-evaluated. This paper discussed the levels of uncertainty inherent in PMF determinations and some difficulties encountered with the SSARR hydrologic model and the HEC-RAS hydraulic model in unsteady mode. The paper also presented and discussed the analysis used to determine incremental damages, upon which a new IDF of 840 m³/s was recommended. The paper discussed the PMF review, modelling methodology, hydrograph inputs, and incremental damage of floods. It was concluded that the PMF review, involving hydraulic routing through the valley bottom together with reconsideration of the previous runoff modeling, provides evidence that the peak reservoir inflow could reasonably be reduced by approximately 20 per cent. 8 refs., 5 tabs., 8 figs.
Installation of the MAXIMUM microscope at the ALS
International Nuclear Information System (INIS)
Ng, W.; Perera, R.C.C.; Underwood, J.H.; Singh, S.; Solak, H.; Cerrina, F.
1995-10-01
The MAXIMUM scanning x-ray microscope, developed at the Synchrotron Radiation Center (SRC) at the University of Wisconsin, Madison, was implemented on the Advanced Light Source in August of 1995. The microscope's initial operation at SRC successfully demonstrated the use of a multilayer-coated Schwarzschild objective for focusing 130 eV x-rays to a spot size of better than 0.1 micron with an electron energy resolution of 250 meV. The performance of the microscope was severely limited because of the relatively low brightness of SRC, which limits the available flux at the focus of the microscope. The high brightness of the ALS is expected to increase the usable flux at the sample by a factor of 1,000. The authors report on the installation of the microscope on bending magnet beamline 6.3.2 at the ALS and the initial measurement of optical performance on the new source, and describe preliminary experiments on the surface chemistry of HF-etched Si.
Ecomorphology of a size-structured tropical freshwater fish community
Piet, G.J.
1998-01-01
Among nine species of a tropical fish community, ecomorphological correlates were sought throughout ontogeny. Ontogenetic changes were distinguished by establishing six pre-defined size-classes. Morphometric data associated with feeding were compared by canonical correspondence analysis to dietary data.
40 CFR 141.13 - Maximum contaminant levels for turbidity.
2010-07-01
... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...
Maximum Power Training and Plyometrics for Cross-Country Running.
Ebben, William P.
2001-01-01
Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanical and velocity specificity (role in preventing injury); and practical application of maximum power training and…
13 CFR 107.840 - Maximum term of Financing.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...
7 CFR 3565.210 - Maximum interest rate.
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...
Drought effect on weaning weight and efficiency relative to cow size in semiarid rangeland.
Scasta, J D; Henderson, L; Smith, T
2015-12-01
Cow size has been suggested to be an important consideration for selecting cattle to match their production environment. Over the last several decades, the trend in genetic selection for maximum growth has led to gradual increases in beef cow size. An unrelated trend during this same period in the western United States has been an increase in temperature, drought frequency, and drought severity. Due to the potential influence of the increasing cow size trend on nutritional maintenance costs and production, we assessed the effect of cow size on weaning weight and efficiency in relation to drought on a semiarid high-elevation ranch in Wyoming. This study addresses a lack of empirical studies on the interaction between cow size and drought. We measured calf weaning weights of 80 Angus × Gelbvieh cows from 2011 to 2014 and assessed how drought affected weaning weights, efficiency (considered as calf weight relative to cow weight), intake requirements, and potential herd sizes relative to cow size. We stratified cows into 5 weight classes (453, 498, 544, 589, and 634 kg) as a proxy for cow size and adjusted weaning weights to a 210-d calf sex adjusted value. Cow size was a significant factor every year, with different cow sizes having advantages or disadvantages different years relative to weaning weight. However, efficiency for the smallest cows (453 kg) was always greater than efficiency for largest cows (634 kg; cows was greater in the driest year (0.41 ± 0.02) than efficiency of the largest cows in the wettest years (0.37 ± 0.01). The change in efficiency (ΔE) between wet and dry years was 0.18 for the smallest cow size and 0.02 for the largest cow size, and ΔE decreased as cow size increased. This is an indication of the ability of smaller cows to lower maintenance requirements in response to changes in the production environment but with optimal upside potential when conditions are favorable. These results indicate large cows (589 to 634 kg) do not maximize
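The efficiency measure in the study above is calf weaning weight relative to cow weight, and ΔE is the change in that ratio between wet and dry years. The sketch below uses hypothetical weaning weights chosen to reproduce the reported ratios, not the study's raw data.

```python
def efficiency(calf_kg, cow_kg):
    """Calf weaning weight relative to cow weight (dimensionless ratio)."""
    return calf_kg / cow_kg

small_cow = 453.0  # kg, smallest weight class in the study
large_cow = 634.0  # kg, largest weight class in the study

# Hypothetical 210-d adjusted weaning weights (kg) in a wet vs. a dry year,
# chosen so the ratios echo the reported values (~0.41 dry for small cows,
# ~0.37 wet for large cows).
delta_e_small = efficiency(267.0, small_cow) - efficiency(186.0, small_cow)
delta_e_large = efficiency(235.0, large_cow) - efficiency(222.0, large_cow)
```

The much larger ΔE for the small cows mirrors the study's point: smaller cows swing their output with conditions, while large cows stay uniformly less efficient.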
Directory of Open Access Journals (Sweden)
Fadia M. Al-Hummayani
2016-04-01
Full Text Available The treatment of deep anterior crossbite is technically challenging due to the difficulty of placing traditional brackets with fixed appliances. This case report presents a non-traditional treatment modality for deep anterior crossbite in an adult pseudo-Class III malocclusion complicated by severely retruded, supraerupted upper and lower incisors. Treatment was carried out in two phases. Phase I treatment was performed with a removable appliance, a modified Hawley appliance with an inverted labial bow, adapted to suit the presented case. Positive overbite and overjet were accomplished in one month in this phase, with minimal forces exerted on the lower incisors. Phase II treatment was performed with fixed appliances (braces) to align the teeth, establish proper overbite and overjet, and close the posterior open bite; this phase was accomplished within 11 months.
Morio, Hiroshi
2003-10-01
Economy class syndrome is venous thromboembolism following air travel. The syndrome was first reported in 1946, and many cases have been reported since the 1990s. Low air pressure and low humidity in the aircraft cabin may contribute to its mechanism. Risk factors for venous thrombosis during flight are old age, short stature, obesity, hormonal therapy, malignancy, smoking, pregnancy or recent parturition, recent trauma or operation, chronic disease, and a history of venous thrombosis. In Japan, female sex is also a risk factor, though the reason is not well understood. For prophylaxis, adequate fluid intake and leg exercise are recommended for all passengers. For passengers at high risk, prophylactic measures such as compression stockings, aspirin, or low-molecular-weight heparin should be considered.
Estimation of Maximum Allowable PV Connection to LV Residential Power Networks
DEFF Research Database (Denmark)
Demirok, Erhan; Sera, Dezso; Teodorescu, Remus
2011-01-01
Maximum photovoltaic (PV) hosting capacity of low voltage (LV) power networks is mainly restricted either by thermal limits of network components or by grid voltage quality resulting from high penetration of distributed PV systems. This maximum hosting capacity may be lower than the available solar...... potential of the geographic area due to power network limitations, even though all rooftops are fully occupied with PV modules. Therefore, it becomes more of an issue to know what exactly limits higher PV penetration levels and which solutions should be engaged efficiently, such as oversizing distribution......
Maximum likelihood estimation of the position of a radiating source in a waveguide
International Nuclear Information System (INIS)
Hinich, M.J.
1979-01-01
An array of sensors is receiving radiation from a source of interest. The source and the array are in a one- or two-dimensional waveguide. The maximum-likelihood estimators of the coordinates of the source are analyzed under the assumption that the noise field is Gaussian. The Cramer-Rao lower bound is of the order of the number of modes which define the source excitation function. The results show that the accuracy of the maximum likelihood estimator of source depth using a vertical array in an infinite horizontal waveguide (such as the ocean) is limited by the number of modes detected by the array, regardless of the array size.
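A hedged sketch of the estimation principle, not the paper's modal analysis: under Gaussian noise, maximum likelihood estimation of a source coordinate reduces to least squares. Here a 1-D "array" observes an invented single-mode excitation pattern shifted by the unknown source depth, and a grid search finds the best-fitting depth.

```python
import math
import random

def pattern(depth, sensor):
    """Invented single-mode excitation function, for illustration only."""
    return math.sin(0.3 * sensor + depth)

def ml_depth(observations, sensors, grid):
    """Grid-search ML estimate: Gaussian noise makes -log L a sum of squares."""
    def neg_log_lik(d):
        return sum((obs - pattern(d, s)) ** 2
                   for obs, s in zip(observations, sensors))
    return min(grid, key=neg_log_lik)

rng = random.Random(0)
sensors = list(range(30))
true_depth = 1.25
obs = [pattern(true_depth, s) + rng.gauss(0, 0.1) for s in sensors]
grid = [i / 100 for i in range(300)]  # candidate depths 0.00 .. 2.99
d_hat = ml_depth(obs, sensors, grid)
```

With more modes (richer patterns) the fit sharpens, which is the intuition behind the mode-count limit on accuracy stated in the abstract.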
Size structures sensory hierarchy in ocean life
DEFF Research Database (Denmark)
Martens, Erik Andreas; Wadhwa, Navish; Jacobsen, Nis Sand
2015-01-01
Life in the ocean is shaped by the trade-off between a need to encounter other organisms for feeding or mating, and to avoid encounters with predators. Avoiding or achieving encounters necessitates an efficient means of collecting the maximum possible information from the surroundings through...... predict the body size limits for various sensory modes, which align very well with size ranges found in literature. The treatise of all ocean life, from unicellular organisms to whales, demonstrates how body size determines available sensing modes, and thereby acts as a major structuring factor of aquatic...
Sugar export limits size of conifer needles
DEFF Research Database (Denmark)
Rademaker, Hanna; Zwieniecki, Maciej A.; Bohr, Tomas
2017-01-01
Plant leaf size varies by more than three orders of magnitude, from a few millimeters to over one meter. Conifer leaves, however, are relatively short and the majority of needles are no longer than 6 cm. The reason for the strong confinement of the trait-space is unknown. We show that sugars...... does not contribute to sugar flow. Remarkably, we find that the size of the active part does not scale with needle length. We predict a single maximum needle size of 5 cm, in accord with data from 519 conifer species. This could help rationalize the recent observation that conifers have significantly...
Network class superposition analyses.
Directory of Open Access Journals (Sweden)
Carl A B Pearson
Full Text Available Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
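One of the analyses mentioned above, counting point attractors (fixed points) of a boolean network, can be illustrated by exhaustive enumeration on a toy example; the 3-node network and its update rules below are invented, not the yeast cell cycle model.

```python
from itertools import product

def step(state, rules):
    """Synchronously update every node of a boolean network."""
    return tuple(rule(state) for rule in rules)

# Invented update rules for a 3-node network.
rules = (
    lambda s: s[0] and not s[2],  # node 0: self-activation inhibited by node 2
    lambda s: s[0] or s[1],       # node 1: activated by node 0 or itself
    lambda s: s[1] and not s[0],  # node 2: activated by node 1, inhibited by 0
)

# A point attractor is a state mapped to itself by the update.
fixed_points = [s for s in product((False, True), repeat=3)
                if step(s, rules) == s]
```

For small networks this enumeration is exact; the ensemble matrix T in the paper is what makes the same question tractable over an entire class of networks.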
Understanding Class in Contemporary Societies
DEFF Research Database (Denmark)
Harrits, Gitte Sommer
2007-01-01
In this paper, I argue that claims about the death of class and the coming of the classless society are premature. Such claims are seldom genuinely empirical, and the theoretical argument often refers to a simple and therefore easily dismissible concept of class. By rejecting the concept of class...... altogether, sociological theory runs the risk of losing the capacity for analysing stratification and vertical differentiation of power and freedom, which in late modernity seem to be of continuing importance. Hence, I argue that although class analysis faces a number of serious challenges, it is possible...... to reinvent class analysis. The sociology of Pierre Bourdieu in many ways introduces an appropriate paradigm, and the paper therefore critically discusses Bourdieu's concept of class. Since the "Bourdieuan" class concept is primarily epistemological, i.e. a research strategy more than a theory, empirical...
Wodtke, Geoffrey T.
2016-03-01
This study outlines a theory of social class based on workplace ownership and authority relations, and it investigates the link between social class and growth in personal income inequality since the 1980s. Inequality trends are governed by changes in between-class income differences, changes in the relative size of different classes, and changes in within-class income dispersion. Data from the General Social Survey are used to investigate each of these changes in turn and to evaluate their impact on growth in inequality at the population level. Results indicate that between-class income differences grew by about 60% since the 1980s and that the relative size of different classes remained fairly stable. A formal decomposition analysis indicates that changes in the relative size of different social classes had a small dampening effect and that growth in between-class income differences had a large inflationary effect on trends in personal income inequality. PMID:27087695
47 CFR 73.1570 - Modulation levels: AM, FM, TV and Class A TV aural.
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Modulation levels: AM, FM, TV and Class A TV... levels: AM, FM, TV and Class A TV aural. (a) The percentage of modulation is to be maintained at as high a level as is consistent with good quality of transmission and good broadcast service, with maximum...
2010-07-01
... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Maximum engine power, displacement... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...
EFFECT OF FARM SIZE AND FREQUENCY OF CUTTING ON ...
African Journals Online (AJOL)
EFFECT OF FARM SIZE AND FREQUENCY OF CUTTING ON OUTPUT OF ... the use of Ordinary Least Square (OLS) estimation technique was used in analyzing ... frequency of cutting that would produce maximum output of the vegetable as ...
Exploring social class: voices of inter-class couples.
McDowell, Teresa; Melendez-Rhodes, Tatiana; Althusius, Erin; Hergic, Sara; Sleeman, Gillian; Ton, Nicky Kieu My; Zimpfer-Bak, A J
2013-01-01
Social class is not often discussed or examined in-depth in couple and family therapy research and literature even though social class shapes familial relationships and is considered an important variable in marital satisfaction. In this qualitative study, we explored the perceptions of eight couples who made lasting commitments across class lines by asking them about the impact of their social class backgrounds on their relationships. Three categories of themes emerged including: (a) differences and similarities in values and attitudes toward education, work, money, and class awareness/classism, (b) relationship issues involving families of origin, friends, and class-based couple conflict, and (c) differences in economic resources, social capital and privileges/opportunities. Implications for assessment and treatment of couples are included. © 2012 American Association for Marriage and Family Therapy.
The maximum entropy production and maximum Shannon information entropy in enzyme kinetics
Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš
2018-04-01
We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. In such a way computed optimal enzyme rate constants in a steady state yield also the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which density of entropy production and Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.
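The link the abstract draws between the most uniform distribution of enzyme states and maximal Shannon information entropy can be checked directly: over n states, H(p) = −Σ pᵢ·log pᵢ is maximized at log n by the uniform distribution. The four-state probabilities below are illustrative, not taken from the paper.

```python
import math

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum(p_i * ln p_i), in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # most uniform distribution of 4 states
skewed = [0.7, 0.1, 0.1, 0.1]        # any non-uniform alternative

h_uniform = shannon_entropy(uniform)  # equals ln(4), the maximum for 4 states
h_skewed = shannon_entropy(skewed)    # strictly smaller
```

This is the informational side of the paper's result; the physical side (that MEPP-optimal rate constants produce this uniform state distribution) requires the full kinetic model.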
Tandberg-Hanssen, E.; Cheng, C. C.; Woodgate, B. E.; Brandt, J. C.; Chapman, R. D.; Athay, R. G.; Beckers, J. M.; Bruner, E. C.; Gurman, J. B.; Hyder, C. L.
1981-01-01
The Ultraviolet Spectrometer and Polarimeter on the Solar Maximum Mission spacecraft is described. It is pointed out that the instrument, which operates in the wavelength range 1150-3600 A, has a spatial resolution of 2-3 arcsec and a spectral resolution of 0.02 A FWHM in second order. A Gregorian telescope, with a focal length of 1.8 m, feeds a 1 m Ebert-Fastie spectrometer. A polarimeter comprising rotating MgF2 waveplates can be inserted behind the spectrometer entrance slit; it permits all four Stokes parameters to be determined. Among the observing modes are rasters, spectral scans, velocity measurements, and polarimetry. Examples of initial observations made since launch are presented.
Variation of Probable Maximum Precipitation in Brazos River Basin, TX
Bhatia, N.; Singh, V. P.
2017-12-01
The Brazos River basin, the second-largest river basin by area in Texas, generates the highest annual flow volume of any river in Texas. With its headwaters located at the confluence of the Double Mountain and Salt forks in Stonewall County, the third-longest flowline of the Brazos River traverses narrow valleys in the rolling topography of west Texas and rugged terrain in the mainly featureless plains of central Texas before discharging into the Gulf of Mexico. Along its major flow network, the river basin covers six different climate regions, characterized by the National Oceanic and Atmospheric Administration (NOAA) on the basis of similar attributes of vegetation, temperature, humidity, rainfall, and seasonal weather changes. Our previous research on Texas climatology illustrated intensified precipitation regimes, which tend to result in extreme flood events. Such events have caused heavy losses of life and infrastructure in the Brazos River basin. Therefore, a region-specific investigation is required for analyzing precipitation regimes along the geographically diverse river network. Owing to the topographical and hydroclimatological variations along the flow network, 24-hour Probable Maximum Precipitation (PMP) was estimated for different hydrologic units along the river network, using the revised Hershfield's method devised by Lan et al. (2017). The method incorporates a standardized variable describing the maximum deviation from the average of a sample, scaled by the standard deviation of the sample. The hydrometeorological literature identifies this method as more reasonable and consistent with the frequency equation. With respect to the calculation of the stable data size required for statistically reliable results, this study also quantified the respective uncertainty associated with PMP values in different hydrologic units. The corresponding range of return periods of PMPs in different hydrologic units was
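The frequency-factor idea behind Hershfield's method can be sketched in a few lines. This is a minimal illustration of the classical formula PMP = mean + K_m * std (not the revision by Lan et al.), where K_m is estimated, as one common convention, from the maximum observed deviation scaled by the standard deviation of the series with the largest value withheld; the estimator choice here is an assumption for illustration only:

```python
import statistics

def hershfield_pmp(annual_max, km=None):
    """Classical Hershfield frequency-factor estimate: PMP = mean + K_m * std.

    If km is not supplied, it is taken as the maximum observed deviation
    scaled by the standard deviation of the series with the largest value
    withheld (an illustrative, simplified choice).
    """
    x = sorted(annual_max)
    if km is None:
        trimmed = x[:-1]  # withhold the largest observation
        km = (x[-1] - statistics.mean(trimmed)) / statistics.stdev(trimmed)
    return statistics.mean(x) + km * statistics.stdev(x)
```

In practice K_m would be read from an envelope of many stations rather than a single series, which is part of what the revised method formalizes.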
Novel maximum-margin training algorithms for supervised neural networks.
Ludwig, Oswaldo; Nunes, Urbano
2010-06-01
This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in the case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while overcoming the complexity involved in solving the constrained optimization problem usually found in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stopping criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by
Relative merits of size, field, and current on ignited tokamak performance
International Nuclear Information System (INIS)
Uckan, N.A.
1988-01-01
A simple global analysis is developed to examine the relative merits of size (L = a or R0), field (B0), and current (I) on ignition regimes of tokamaks under various confinement scaling laws. Scalings of key parameters with L, B0, and I are presented at several operating points, including (a) the optimal path to ignition (saddle point), (b) ignition at minimum beta, (c) ignition at 10 keV, and (d) maximum performance at the limits of density and beta. Expressions for the saddle point and the minimum conditions needed for ohmic ignition are derived analytically for any confinement model of the form tau_E ~ n^x T^y. For a wide range of confinement models, the ''figure of merit'' parameters and I are found to give a good indication of the relative performance of the devices, where q* is the cylindrical safety factor. As an illustration, the results are applied to representative ''CIT'' (as a class of compact, high-field ignition tokamaks) and ''Super-JETs'' [a class of large-size (few x JET), low-field, high-current (≥20-MA) devices].
Evolution of the earliest horses driven by climate change in the Paleocene-Eocene Thermal Maximum.
Secord, Ross; Bloch, Jonathan I; Chester, Stephen G B; Boyer, Doug M; Wood, Aaron R; Wing, Scott L; Kraus, Mary J; McInerney, Francesca A; Krigbaum, John
2012-02-24
Body size plays a critical role in mammalian ecology and physiology. Previous research has shown that many mammals became smaller during the Paleocene-Eocene Thermal Maximum (PETM), but the timing and magnitude of that change relative to climate change have been unclear. A high-resolution record of continental climate and equid body size change shows a directional size decrease of ~30% over the first ~130,000 years of the PETM, followed by a ~76% increase in the recovery phase of the PETM. These size changes are negatively correlated with temperature inferred from oxygen isotopes in mammal teeth and were probably driven by shifts in temperature and possibly high atmospheric CO2 concentrations. These findings could be important for understanding mammalian evolutionary responses to future global warming.
Directory of Open Access Journals (Sweden)
Ning-Cong Xiao
2013-12-01
Full Text Available In this paper, a combination of the maximum entropy method and Bayesian inference is proposed for reliability assessment of deteriorating systems. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of the uncertain parameters more accurately, since it does not need any additional information or assumptions. Finally, two optimization models are presented which can be used to determine the lower and upper bounds of a system's probability of failure under vague environmental conditions. Two numerical examples are investigated to demonstrate the proposed method.
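The maximum entropy density idea can be illustrated with a small sketch. For a discrete distribution constrained only by a fixed mean, the maximum-entropy solution takes the Gibbs form p_i ∝ exp(λ x_i), with λ found numerically; this is a textbook simplification and not the paper's fuzzy/Bayesian formulation:

```python
import math

def maxent_discrete(values, target_mean, iters=100):
    """Maximum-entropy pmf on `values` subject to a fixed mean.

    The solution has the Gibbs form p_i ∝ exp(lam * x_i); `lam` is
    found by bisection so that the pmf mean matches `target_mean`
    (the mean is monotone increasing in lam).
    """
    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    lo, hi = -50.0, 50.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]
```

With the mean constraint equal to the unweighted average, λ = 0 and the uniform distribution (maximum Shannon entropy) is recovered, matching the "most uniform probability distribution" observation in the abstract above.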
Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo
Cheong, R. Y.; Gabda, D.
2017-09-01
Analysis of flood trends is vital since flooding threatens human living in terms of finance, environment and security. The data of annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that the MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov chain Monte Carlo (MCMC) based on the Metropolis-Hastings algorithm to estimate the GEV parameters. The Bayesian MCMC method is a statistical inference approach which studies parameter estimation by using the posterior distribution based on Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by plain Monte Carlo methods. This approach also accounts for more uncertainty in parameter estimation, which then presents a better prediction of maximum river flow in Sabah.
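A random-walk Metropolis sampler of the kind described can be sketched as follows. To keep the sketch short it targets the Gumbel distribution (the GEV with shape parameter zero) under flat priors, which is an assumption of this illustration; the paper's full three-parameter GEV sampler is not reproduced:

```python
import math
import random

def gumbel_loglik(data, mu, sigma):
    """Log-likelihood of a Gumbel(mu, sigma) sample (GEV with shape 0)."""
    if sigma <= 0.0:
        return -math.inf
    z = [(x - mu) / sigma for x in data]
    return -len(data) * math.log(sigma) - sum(z) - sum(math.exp(-zi) for zi in z)

def metropolis_gumbel(data, n_iter=5000, step=0.2, seed=1):
    """Random-walk Metropolis sampler for (mu, sigma) under flat priors.

    Proposals perturb both parameters jointly; a proposal is accepted
    with probability min(1, posterior ratio).
    """
    rng = random.Random(seed)
    mu, sigma = sum(data) / len(data), 1.0   # crude starting point
    cur = gumbel_loglik(data, mu, sigma)
    samples = []
    for _ in range(n_iter):
        mu_p = mu + rng.gauss(0.0, step)
        sig_p = sigma + rng.gauss(0.0, step)
        prop = gumbel_loglik(data, mu_p, sig_p)
        if math.log(1.0 - rng.random()) < prop - cur:  # accept/reject
            mu, sigma, cur = mu_p, sig_p, prop
        samples.append((mu, sigma))
    return samples
```

Posterior summaries (after discarding burn-in) then give parameter estimates together with their uncertainty, which is the advantage over a point MLE noted in the abstract.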
Maximum likelihood pixel labeling using a spatially variant finite mixture model
International Nuclear Information System (INIS)
Gopal, S.S.; Hebert, T.J.
1996-01-01
We propose a spatially-variant mixture model for pixel labeling. Based on this spatially-variant mixture model we derive an expectation maximization algorithm for maximum likelihood estimation of the pixel labels. While most algorithms using mixture models entail the subsequent use of a Bayes classifier for pixel labeling, the proposed algorithm yields maximum likelihood estimates of the labels themselves and results in unambiguous pixel labels. The proposed algorithm is fast, robust, easy to implement, flexible in that it can be applied to any arbitrary image data where the number of classes is known and, most importantly, obviates the need for an explicit labeling rule. The algorithm is evaluated both quantitatively and qualitatively on simulated data and on clinical magnetic resonance images of the human brain
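For contrast with the spatially variant model above, a conventional EM baseline for mixture-based pixel labeling can be sketched as follows. This is a plain (spatially invariant) two-component 1-D Gaussian mixture with hard labels taken by maximum responsibility; the paper's per-pixel mixing weights and direct ML label estimation are not reproduced, and the two-class setting is purely illustrative:

```python
import math

def em_two_class(pixels, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture; returns hard labels.

    E-step: responsibilities from current parameters.
    M-step: weighted updates of mixing weights, means and variances.
    Labels are the argmax responsibility after the final E-step.
    """
    mu = [min(pixels), max(pixels)]   # spread the initial means apart
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    resp = []
    for _ in range(n_iter):
        resp = []
        for x in pixels:
            p = [pi[k] / math.sqrt(2.0 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(pixels)
            mu[k] = sum(r[k] * x for r, x in zip(resp, pixels)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, pixels)) / nk + 1e-6  # variance floor
    return [0 if r[0] > r[1] else 1 for r in resp]
```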
Optimal control of a double integrator a primer on maximum principle
Locatelli, Arturo
2017-01-01
This book provides an introductory yet rigorous treatment of Pontryagin’s Maximum Principle and its application to optimal control problems when simple and complex constraints act on state and control variables, the two classes of variable in such problems. The achievements resulting from first-order variational methods are illustrated with reference to a large number of problems that, almost universally, relate to a particular second-order, linear and time-invariant dynamical system, referred to as the double integrator. The book is ideal for students who have some knowledge of the basics of system and control theory and possess the calculus background typically taught in undergraduate curricula in engineering. Optimal control theory, of which the Maximum Principle must be considered a cornerstone, has been very popular ever since the late 1950s. However, the possibly excessive initial enthusiasm engendered by its perceived capability to solve any kind of problem gave way to its equally unjustified rejecti...
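The double integrator mentioned above admits the best-known worked example of the Maximum Principle: minimum-time transfer to the origin with |u| ≤ 1, whose optimal control is bang-bang with the switching function σ = x1 + x2|x2|/2. A small simulation of that classical law (a sketch, not material from the book):

```python
def min_time_double_integrator(x1, x2, dt=1e-3, t_max=10.0):
    """Simulate the classical minimum-time bang-bang law for the
    double integrator x1' = x2, x2' = u, |u| <= 1:
        u = -sign(x1 + x2*|x2|/2),
    i.e. full thrust on one side of the switching curve, full reverse
    on the other. Integration is semi-implicit Euler."""
    t = 0.0
    while t < t_max and (abs(x1) > 1e-2 or abs(x2) > 1e-2):
        sigma = x1 + x2 * abs(x2) / 2.0
        u = -1.0 if sigma >= 0.0 else 1.0
        x2 += u * dt
        x1 += x2 * dt
        t += dt
    return x1, x2, t
```

From the state (1, 0) the analytical minimum time is 2 (decelerate for one unit of time, then accelerate for one), which the simulation reproduces up to discretization chatter along the switching curve.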
International Nuclear Information System (INIS)
Croce, R P; Demma, Th; Longo, M; Marano, S; Matta, V; Pierro, V; Pinto, I M
2003-01-01
The cumulative distribution of the supremum of a set (bank) of correlators is investigated in the context of maximum likelihood detection of gravitational wave chirps from coalescing binaries with unknown parameters. Accurate (lower-bound) approximants are introduced based on a suitable generalization of previous results by Mohanty. Asymptotic properties (in the limit where the number of correlators goes to infinity) are highlighted. The validity of numerical simulations made on small-size banks is extended to banks of any size, via a Gaussian correlation inequality
Reconciling Virtual Classes with Genericity
DEFF Research Database (Denmark)
Ernst, Erik
2006-01-01
is functional abstraction, yielding more precise knowledge about the outcome; the prime example is type parameterized classes. This paper argues that they should be clearly separated to work optimally. We have applied this design philosophy to a language based on an extension mechanism, namely virtual...... classes. As a result, a kind of type parameters have been introduced, but they are simple and only used where they excel. Conversely, final definitions of virtual classes have been removed from the language, thus making virtual classes more flexible. The resulting language presents a clearer and more...
Site Specific Probable Maximum Precipitation Estimates and Professional Judgement
Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.
2015-12-01
State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized due to their limitations on basin size, questionable applicability in regions affected by orographic effects, their lack of consistent methods, and generally by their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement, via a generic audit and one in-depth site-specific review, as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm-representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially
International Nuclear Information System (INIS)
Hutchinson, Thomas H.; Boegi, Christian; Winter, Matthew J.; Owens, J. Willie
2009-01-01
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic organisms and the
Lanjouw, P.; Ravallion, M.
1995-01-01
The widely held view that larger families tend to be poorer in developing countries has influenced research and policy. The scope for size economies in consumption cautions against this view. The authors find that the correlation between poverty and size vanishes in Pakistan when the size elasticity
Zwart, de B.A.M.
2013-01-01
To speak of the project for the mid-size city is to speculate about the possibility of mid-size urbanity as a design category. An urbanism not necessarily defined by the scale of the intervention or the size of the city undergoing transformation, but by the framing of the issues at hand and the
The mechanics of granitoid systems and maximum entropy production rates.
Hobbs, Bruce E; Ord, Alison
2010-01-13
A model for the formation of granitoid systems is developed involving melt production spatially below a rising isotherm that defines melt initiation. Production of the melt volumes necessary to form granitoid complexes within 10^4-10^7 years demands control of the isotherm velocity by melt advection. This velocity is one control on the melt flux generated spatially just above the melt isotherm, which is the control valve for the behaviour of the complete granitoid system. Melt transport occurs in conduits initiated as sheets or tubes comprising melt inclusions arising from Gurson-Tvergaard constitutive behaviour. Such conduits appear as leucosomes parallel to lineations and foliations, and ductile and brittle dykes. The melt flux generated at the melt isotherm controls the position of the melt solidus isotherm and hence the physical height of the Transport/Emplacement Zone. A conduit width-selection process, driven by changes in melt viscosity and constitutive behaviour, operates within the Transport Zone to progressively increase the width of apertures upwards. Melt can also be driven horizontally by gradients in topography; these horizontal fluxes can be similar in magnitude to vertical fluxes. Fluxes induced by deformation can compete with both buoyancy and topographic-driven flow over all length scales and results locally in transient 'ponds' of melt. Pluton emplacement is controlled by the transition in constitutive behaviour of the melt/magma from elastic-viscous at high temperatures to elastic-plastic-viscous approaching the melt solidus enabling finite thickness plutons to develop. The system involves coupled feedback processes that grow at the expense of heat supplied to the system and compete with melt advection. The result is that limits are placed on the size and time scale of the system. Optimal characteristics of the system coincide with a state of maximum entropy production rate. This journal is © 2010 The Royal Society
Class Counts: Education, Inequality, and the Shrinking Middle Class
Ornstein, Allan
2007-01-01
Class differences and class warfare have existed since the beginning of western civilization, but the gap in income and wealth between the rich (top 10 percent) and the rest has increased steadily in the last twenty-five years. The U.S. is heading for a financial oligarchy much worse than the aristocratic old world that our Founding Fathers feared…
International Nuclear Information System (INIS)
Rocha, J.C. da.
1981-02-01
Pure sintered alumina and the optimization of the sintering parameters to obtain the highest mechanical strength are discussed. Test specimens are sintered from a fine powder of pure alumina (Al2O3), α phase, at different temperatures and times, in air. The microstructures are analysed with respect to porosity and grain size. Depending on the temperature or the time of sintering, there is a maximum in the mechanical strength. (A.R.H.) [pt]
Generalized uncertainty principle and the maximum mass of ideal white dwarfs
Energy Technology Data Exchange (ETDEWEB)
Rashidi, Reza, E-mail: reza.rashidi@srttu.edu
2016-11-15
The effects of a generalized uncertainty principle on the structure of an ideal white dwarf star is investigated. The equation describing the equilibrium configuration of the star is a generalized form of the Lane–Emden equation. It is proved that the star always has a finite size. It is then argued that the maximum mass of such an ideal white dwarf tends to infinity, as opposed to the conventional case where it has a finite value.
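The equilibrium structure referred to above generalizes the Lane-Emden equation; the standard (non-generalized) form can be integrated numerically in a few lines. This sketch solves θ'' + (2/ξ)θ' + θ^n = 0 with θ(0) = 1, θ'(0) = 0 and returns the first zero ξ₁, the dimensionless stellar radius; it is the classical equation, not the GUP-modified one from the paper:

```python
def lane_emden_first_zero(n, h=1e-4):
    """Integrate the standard Lane-Emden equation
        theta'' + (2/xi)*theta' + theta**n = 0,  theta(0)=1, theta'(0)=0,
    with a semi-implicit Euler scheme and return the first zero xi_1."""
    # start a small step from the centre via the series expansion
    # theta ~ 1 - xi^2/6, which avoids the 2/xi singularity at xi = 0
    xi = h
    theta = 1.0 - h * h / 6.0
    dtheta = -h / 3.0
    while theta > 0.0:
        d2 = -(2.0 / xi) * dtheta - theta ** n
        dtheta += h * d2
        theta += h * dtheta
        xi += h
    return xi
```

For n = 3 (the relativistic white-dwarf polytrope behind the Chandrasekhar mass) the known value is ξ₁ ≈ 6.897, and for n = 1 the exact solution sin(ξ)/ξ gives ξ₁ = π, both of which the integrator reproduces.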
Field size and dose distribution of electron beam
International Nuclear Information System (INIS)
Kang, Wee Saing
1980-01-01
The author considers some relations between the field size and dose distribution of electron beams. The doses of electron beams are measured either with an ion chamber and electrometer or with dosimetry film. Several relations are analyzed qualitatively: the energy of incident electron beams and the depth of maximum dose; field size of electron beams and depth of maximum dose; field size and scatter factor; electron energy and scatter factor; collimator shape and scatter factor; electron energy and surface dose; field size and surface dose; field size and central-axis depth dose; and field size and practical range. The results are that the field size of the electron beam influences the depth of maximum dose, scatter factor, surface dose and central-axis depth dose; that the scatter factor depends on the field size and energy of the electron beam and on the shape of the collimator; that the depth of maximum dose and the surface dose depend on the energy of the electron beam; but that the practical range of the electron beam is independent of field size.
Directory of Open Access Journals (Sweden)
Scott Horton
2012-01-01
Full Text Available Treatment of bipolar disorder with lithium therapy during pregnancy is a medical challenge. Bipolar disorder is more prevalent in women and its onset is often concurrent with peak reproductive age. Treatment typically involves administration of the element lithium, which has been classified as a class D drug (legal to use during pregnancy, but may cause birth defects and is one of only thirty known teratogenic drugs. There is no clear recommendation in the literature on the maximum acceptable dosage regimen for pregnant, bipolar women. We recommend a maximum dosage regimen based on a physiologically based pharmacokinetic (PBPK model. The model simulates the concentration of lithium in the organs and tissues of a pregnant woman and her fetus. First, we modeled time-dependent lithium concentration profiles resulting from lithium therapy known to have caused birth defects. Next, we identified maximum and average fetal lithium concentrations during treatment. Then, we developed a lithium therapy regimen to maximize the concentration of lithium in the mother’s brain, while maintaining the fetal concentration low enough to reduce the risk of birth defects. This maximum dosage regimen suggested by the model was 400 mg lithium three times per day.
Microprocessor Controlled Maximum Power Point Tracker for Photovoltaic Application
International Nuclear Information System (INIS)
Jiya, J. D.; Tahirou, G.
2002-01-01
This paper presents a microprocessor-controlled maximum power point tracker for a photovoltaic module. Input current and voltage are measured and multiplied within the microprocessor, which contains an algorithm to seek the maximum power point. The duty cycle of the DC-DC converter at which the maximum power occurs is obtained, noted and adjusted.
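A common algorithm for this kind of seek-and-adjust tracking is perturb-and-observe; the abstract does not name the algorithm used, so the following is a generic sketch, with `power_of` standing in for the measured current-times-voltage product (a hypothetical interface):

```python
def perturb_and_observe(power_of, v0=12.0, dv=0.1, steps=200):
    """Perturb-and-observe MPPT sketch: nudge the operating voltage,
    keep the perturbation direction while measured power rises, and
    reverse it when power falls, so the operating point oscillates
    tightly around the maximum power point."""
    v = v0
    p_prev = power_of(v)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv
        p = power_of(v)
        if p < p_prev:
            direction = -direction  # power fell: reverse the perturbation
        p_prev = p
    return v
```

In a microcontroller the returned voltage would be realized by adjusting the converter duty cycle rather than set directly.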
Size-based predictions of food web patterns
DEFF Research Database (Denmark)
Zhang, Lai; Hartvig, Martin; Knudsen, Kim
2014-01-01
We employ size-based theoretical arguments to derive simple analytic predictions of ecological patterns and properties of natural communities: size-spectrum exponent, maximum trophic level, and susceptibility to invasive species. The predictions are brought about by assuming that an infinite number...... of species are continuously distributed on a size-trait axis. It is, however, an open question whether such predictions are valid for a food web with a finite number of species embedded in a network structure. We address this question by comparing the size-based predictions to results from dynamic food web...... simulations with varying species richness. To this end, we develop a new size- and trait-based food web model that can be simplified into an analytically solvable size-based model. We confirm existing solutions for the size distribution and derive novel predictions for maximum trophic level and invasion...
Directory of Open Access Journals (Sweden)
Pentti Kujala
2018-05-01
Full Text Available Selection of a suitable ice class for ship operation is an important but not simple task. The increased exploitation of the Polar waters, in both seasonal periods and geographical areas, as well as the introduction of new international design standards such as the Polar Code, reduces the relevancy of using existing experience as the basis for the selection, and new methods and knowledge have to be developed. This paper analyses what can be the limiting ice thickness for ships navigating in the Russian Arctic and designed according to the Finnish-Swedish ice class rules. The permanent deformations of ice-strengthened shell structures for various ice classes are determined using MT Uikku as the typical size of a vessel navigating in ice. The ice load in various conditions is determined using the ARCDEV data from the winter of 1998 as the basic database. By comparing the measured load in various ice conditions with the serviceability limit state of the structures, the limiting ice thickness for various ice classes is determined. The database for maximum loads includes 3 weeks of ice load measurements during April 1998 in the Kara Sea, mainly under icebreaker assistance. A Gumbel I distribution is fitted to the measured 20-min maximum values, and the data are divided into various classes using ship speed, ice thickness and ice concentration as the main parameters. Results encouragingly show that present designs are safer than assumed in the Polar Code, suggesting that assisted operation in Arctic conditions is feasible in rougher conditions than indicated in the Polar Code. Keywords: Loads, Serviceability, Limit ice thickness, Polar code
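Fitting a Gumbel (type I extreme value) distribution to block maxima, as done above for the 20-minute maximum loads, can be sketched with a simple method-of-moments estimator; the paper's actual fitting procedure is not specified, so this is an illustrative stand-in:

```python
import math
import statistics

def gumbel_moments_fit(maxima):
    """Method-of-moments fit of a Gumbel (type I extreme value)
    distribution to block maxima:
        scale = s * sqrt(6) / pi,   loc = mean - gamma * scale,
    where gamma is the Euler-Mascheroni constant."""
    gamma = 0.5772156649
    scale = statistics.stdev(maxima) * math.sqrt(6.0) / math.pi
    loc = statistics.mean(maxima) - gamma * scale
    return loc, scale

def gumbel_exceedance(x, loc, scale):
    """P(M > x) under the fitted Gumbel distribution."""
    return 1.0 - math.exp(-math.exp(-(x - loc) / scale))
```

The exceedance probability of a design load then follows directly from the fitted parameters, which is how measured loads can be compared against a serviceability limit state.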
Type Families with Class, Type Classes with Family
DEFF Research Database (Denmark)
Serrano, Alejandro; Hage, Jurriaan; Bahr, Patrick
2015-01-01
Type classes and type families are key ingredients in Haskell programming. Type classes were introduced to deal with ad-hoc polymorphism, although with the introduction of functional dependencies, their use expanded to type-level programming. Type families also allow encoding type-level functions......, now as rewrite rules. This paper looks at the interplay of type classes and type families, and how to deal with shortcomings in both of them. Furthermore, we show how to use families to simulate classes at the type level. However, type families alone are not enough for simulating a central feature...... of type classes: elaboration, that is, generating code from the derivation of a rewriting. We look at ways to solve this problem in current Haskell, and propose an extension to allow elaboration during the rewriting phase....
Subaltern Classes, Class Struggles and Hegemony : a Gramscian Approach
Directory of Open Access Journals (Sweden)
Ivete Simionatto
2009-01-01
Full Text Available This article sought to revive the concept of subaltern classes and their relation with other categories, particularly the State, civil society and hegemony in the thinking of Antonio Gramsci, as a support for contemporary class struggles. It also analyzes the relations between subaltern classes, common sense and ideology, as well as the forms of “overcoming” conceptualized by Gramsci, through the culture and philosophy of praxis. The paper revives the discussion of the subaltern classes, based on the original Gramscian formulation in the realm of Marxism, through the dialectic interaction between structure and superstructure, economy and politics. In addition to the conceptual revival, it indicates some elements that can support the discussion of the forms of subalternity found in contemporary reality and the possibilities for strengthening the struggles of these class layers, above all in moments of strong demobilization of popular participation.
Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir
2011-01-01
Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353
Vessel size measurements in angiograms: Manual measurements
International Nuclear Information System (INIS)
Hoffmann, Kenneth R.; Dmochowski, Jacek; Nazareth, Daryl P.; Miskolczi, Laszlo; Nemes, Balazs; Gopal, Anant; Wang Zhou; Rudin, Stephen; Bednarek, Daniel R.
2003-01-01
Vessel size measurement is perhaps the most often performed quantitative analysis in diagnostic and interventional angiography. Although automated vessel sizing techniques are generally considered to have good accuracy and precision, we have observed that clinicians rarely use these techniques in standard clinical practice, choosing to indicate the edges of vessels and catheters to determine sizes and calibrate magnifications, i.e., manual measurements. Thus, we undertook an investigation of the accuracy and precision of vessel sizes calculated from manually indicated edges of vessels. Manual measurements were performed by three neuroradiologists and three physicists. Vessel sizes ranged from 0.1-3.0 mm in simulation studies and 0.3-6.4 mm in phantom studies. Simulation resolution functions had full-widths-at-half-maximum (FWHM) ranging from 0.0 to 0.5 mm. Phantom studies were performed with 4.5 in., 6 in., 9 in., and 12 in. image intensifier modes, magnification factor = 1, with and without zooming. The accuracy and reproducibility of the measurements ranged from 0.1 to 0.2 mm, depending on vessel size, resolution, and pixel size, and zoom. These results indicate that manual measurements may have accuracies comparable to automated techniques for vessels with sizes greater than 1 mm, but that automated techniques which take into account the resolution function should be used for vessels with sizes smaller than 1 mm
Context-sensitive intra-class clustering
Yu, Yingwei; Gutierrez-Osuna, Ricardo; Choe, Yoonsuck
2014-01-01
This paper describes a new semi-supervised learning algorithm for intra-class clustering (ICC). ICC partitions each class into sub-classes in order to minimize overlap across clusters from different classes. This is achieved by allowing partitioning
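The core idea of partitioning each class into sub-classes can be sketched with per-class k-means (a plain unsupervised stand-in under assumed synthetic data; the paper's ICC algorithm additionally minimizes overlap across clusters from different classes, which this sketch does not attempt):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm), used here to split one class."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest center, then update centers.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def intra_class_clusters(X, y, k=2):
    """Partition each class into k sub-classes independently."""
    return {label: kmeans(X[y == label], k) for label in np.unique(y)}

rng = np.random.default_rng(0)
# Two classes, each itself a mixture of two well-separated blobs (illustrative data).
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0.0, 2.0, 5.0, 7.0)])
y = np.array([0] * 100 + [1] * 100)
sub = intra_class_clusters(X, y, k=2)
assert set(sub) == {0, 1} and all(len(v) == 100 for v in sub.values())
```

A full ICC implementation would couple the per-class partitions through a cross-class overlap penalty rather than clustering each class in isolation.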
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
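The maximum-expected-utility rule described above can be sketched in a few lines (a minimal illustration; the utility matrix and posterior probabilities are hypothetical values for demonstration, not taken from the paper):

```python
import numpy as np

def meu_decision(posteriors, utility):
    """Pick the decision that maximizes expected utility.

    posteriors : (3,) array of P(class_t | data)
    utility    : (3, 3) array, utility[d, t] = utility of deciding d when the truth is t
    """
    expected = utility @ posteriors  # expected utility of each candidate decision
    return int(np.argmax(expected))

# Equal error utility: every correct decision worth 1, every error under the
# same true class worth 0 (hypothetical 0/1 values for illustration).
U = np.eye(3)

# With a 0/1 utility matrix, MEU reduces to picking the largest posterior,
# i.e. the maximum-correctness (minimum-error) rule mentioned in the abstract.
p = np.array([0.2, 0.5, 0.3])
assert meu_decision(p, U) == 1
```

Unequal off-diagonal utilities (violating the equal error utility assumption) would tilt the decision boundaries away from this simple largest-posterior rule.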
Directory of Open Access Journals (Sweden)
S. Buenrostro Mazon
2009-01-01
Full Text Available Studies of secondary aerosol-particle formation depend on identifying days on which new particle formation occurs and, by comparing them to days with no signs of particle formation, identifying the conditions favourable for formation. Continuous aerosol size distribution data have been collected at the SMEAR II station in a boreal forest in Hyytiälä, Finland, since 1996, making it the longest time series of aerosol size distributions available worldwide. In previous studies, the data have been classified as particle-formation event, nonevent, and undefined days, with almost 40% of the dataset classified as undefined. In the present study, eleven years (1996–2006) of undefined days (1630 days) were reanalyzed and subdivided into three new classes: failed events (37% of all previously undefined days), ultrafine-mode concentration peaks (34%), and pollution-related concentration peaks (19%). Unclassified days (10%) comprised the rest of the previously undefined days. The failed events were further subdivided into tail events (21%), where a tail of a formation event is presumed to have been advected to Hyytiälä from elsewhere, and quasi events (16%), where new particles appeared at sizes 3–10 nm but showed unclear growth, the mode persisted for less than an hour, or both. The ultrafine concentration peak days were further subdivided into nucleation-mode peaks (24%) and Aitken-mode peaks (10%), depending on the size range in which the particles occurred. The mean annual distribution of the failed events has a maximum during summer, whereas the two peak classes have maxima during winter. The summer minimum previously found in the seasonal distribution of event days partially offsets a summer maximum in failed-event days. Daily-mean relative humidity and condensation sink values are useful in discriminating the new classes from each other. Specifically, event days had low values of relative humidity and condensation sink relative to nonevent days. Failed-event days
On uniqueness of characteristic classes
DEFF Research Database (Denmark)
Feliu, Elisenda
2011-01-01
We give an axiomatic characterization of maps from algebraic K-theory. The results apply to a large class of maps from algebraic K-theory to any suitable cohomology theory or to algebraic K-theory. In particular, we obtain comparison theorems for the Chern character and Chern classes and for the ...
Perez, Angel B.
2016-01-01
Colleges and universities have a significant role to play in shaping the future of race and class relations in America. As exhibited in this year's presidential election, race and class continue to divide. Black Lives Matter movements, campus protests, and police shootings are just a few examples of the proliferation of intolerance, and higher…
Propagating Class and Method Combination
DEFF Research Database (Denmark)
Ernst, Erik
1999-01-01
number of implicit combinations. For example, it is possible to specify separate aspects of a family of classes, and then combine several aspects into a full-fledged class family. The combination expressions would explicitly combine whole-family aspects, and by propagation implicitly combine the aspects...
Social Class and the Extracurriculum
Barratt, Will
2012-01-01
Social class is a powerful and often unrecognized influence on student participation in the extracurriculum. Spontaneous student-created extracurricular experiences depend on students affiliating and interacting with each other; student social class is a powerful influence on student affiliations. Students tend to exercise consciousness of kind-…
Translanguaging in a Reading Class
Vaish, Viniti; Subhan, Aidil
2015-01-01
Using translanguaging as a theoretical foundation, this paper analyses findings from a Grade 2 reading class for low achieving students, where Malay was used as a scaffold to teach English. Data come from one class in one school in Singapore and its Learning Support Programme (LSP), which is part of a larger research project on biliteracy. The LSP…
Netten, Joan W., Ed.
1984-01-01
A collection of ideas for class activities in elementary and secondary language classes includes a vocabulary review exercise and games of memory, counting, vocabulary, flashcard tic-tac-toe, dice, trashcans, questioning, and spelling. Some are designed specifically for French. (MSE)
Possible new class of dense white dwarfs
International Nuclear Information System (INIS)
Glendenning, N.K.; Kettner, C.; Weber, F.
1995-01-01
If the strange quark matter hypothesis is true, then a new class of white dwarfs can exist whose nuclear material in their deep interiors can have a density as high as the neutron drip density, a few hundred times the density in maximum-mass white dwarfs and 4×10⁴ times the density in dwarfs of mass M ∼ 0.6 M⊙. Their masses fall in the approximate range 10⁻⁴ to 1 M⊙. They are stable against acoustical modes of vibration. A strange quark core stabilizes these stars, which otherwise would have central densities that would place them in the unstable region of the sequence between white dwarfs and neutron stars. copyright 1995 American Institute of Physics
Step Sizes for Strong Stability Preservation with Downwind-Biased Operators
Ketcheson, David I.
2011-01-01
order accuracy. It is possible to achieve more relaxed step size restrictions in the discretization of hyperbolic PDEs through the use of both upwind- and downwind-biased semidiscretizations. We investigate bounds on the maximum SSP step size for methods
Statistical Inference on the Canadian Middle Class
Directory of Open Access Journals (Sweden)
Russell Davidson
2018-03-01
Full Text Available Conventional wisdom says that the middle classes in many developed countries have recently suffered losses, in terms of both the share of the total population belonging to the middle class and their share in total income. Here, distribution-free methods are developed for inference on these shares, by deriving expressions for the asymptotic variances of the sample estimates and the covariance of the estimates. Asymptotic inference can be undertaken based on asymptotic normality. Bootstrap inference can be expected to be more reliable, and appropriate bootstrap procedures are proposed. As an illustration, samples of individual earnings drawn from Canadian census data are used to test various hypotheses about the middle-class shares, and confidence intervals for them are computed. It is found that, for the earlier censuses, sample sizes are large enough for asymptotic and bootstrap inference to be almost identical, but that, in the twenty-first century, the bootstrap fails on account of a strange phenomenon whereby many presumably different incomes in the data are rounded to one and the same value. Another difference between the centuries is the appearance of heavy right-hand tails in the income distributions of both men and women.
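Inference on a middle-class population share can be sketched with a percentile bootstrap (a minimal illustration on synthetic lognormal earnings; the income band of 75–150% of the median is an assumed convention, and the procedure here is a plain percentile bootstrap rather than the paper's specific proposals):

```python
import numpy as np

rng = np.random.default_rng(0)

def middle_class_share(incomes, lo=0.75, hi=1.5):
    """Share of the population with income in [lo*median, hi*median]."""
    m = np.median(incomes)
    return float(np.mean((incomes >= lo * m) & (incomes <= hi * m)))

def bootstrap_ci(incomes, stat, n_boot=1000, alpha=0.05):
    """Percentile bootstrap confidence interval for a statistic."""
    n = len(incomes)
    reps = [stat(rng.choice(incomes, size=n, replace=True)) for _ in range(n_boot)]
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

# Synthetic lognormal "earnings" sample (illustrative, not census data).
incomes = rng.lognormal(mean=10.5, sigma=0.6, size=2000)
share = middle_class_share(incomes)
lo_ci, hi_ci = bootstrap_ci(incomes, middle_class_share)
assert 0 < lo_ci < hi_ci < 1
assert lo_ci <= share <= hi_ci  # point estimate lies inside its own bootstrap CI
```

The rounding pathology described in the abstract, where many distinct incomes collapse onto a single value, would show up here as a degenerate bootstrap distribution of the share, which is why the paper's tailored procedures matter for the later censuses.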
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
... 49 Transportation 3 2010-10-01 Maximum operating pressure. 195.406 Section 195.406 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for...