Knapczyk, Frances N; Conner, Jeffrey K
2007-10-01
Kingsolver et al.'s review of phenotypic selection gradients from natural populations provided a glimpse of the form and strength of selection in nature and how selection on different organisms and traits varies. Because this review's underlying database could be a key tool for answering fundamental questions concerning natural selection, it has spawned discussion of potential biases inherent in the review process. Here, we explicitly test for two commonly discussed sources of bias: sampling error and publication bias. We model the relationship between variance among selection gradients and sample size that sampling error produces by subsampling large empirical data sets containing measurements of traits and fitness. We find that this relationship was not mimicked by the review data set and therefore conclude that sampling error does not bias estimations of the average strength of selection. Using graphical tests, we find evidence for bias against publishing weak estimates of selection only among very small studies (N<38). However, this evidence is counteracted by excess weak estimates in larger studies. Thus, estimates of average strength of selection from the review are less biased than is often assumed. Devising and conducting straightforward tests for different biases allows concern to be focused on the most troublesome factors.
Yang, Ziheng; Nielsen, Rasmus
2008-01-01
Current models of codon substitution are formulated at the level of nucleotide substitution and do not explicitly consider the separate effects of mutation and selection. They are thus incapable of inferring whether mutation or selection is responsible for evolution at silent sites. Here we...... to examine the null hypothesis that codon usage is due to mutation bias alone, not influenced by natural selection. Application of the test to the mammalian data led to rejection of the null hypothesis in most genes, suggesting that natural selection may be a driving force in the evolution of synonymous...... codon usage in mammals. Estimates of selection coefficients nevertheless suggest that selection on codon usage is weak and that most mutations are nearly neutral. The sensitivity of the analysis to the assumed mutation model is discussed....
The Strength of Selection against Neanderthal Introgression.
Juric, Ivan; Aeschbacher, Simon; Coop, Graham
2016-11-01
Hybridization between humans and Neanderthals has resulted in a low level of Neanderthal ancestry scattered across the genomes of many modern-day humans. After hybridization, on average, selection appears to have removed Neanderthal alleles from the human population. Quantifying the strength and causes of this selection against Neanderthal ancestry is key to understanding our relationship to Neanderthals and, more broadly, how populations remain distinct after secondary contact. Here, we develop a novel method for estimating the genome-wide average strength of selection and the density of selected sites using estimates of Neanderthal allele frequency along the genomes of modern-day humans. We confirm that East Asians had somewhat higher initial levels of Neanderthal ancestry than Europeans even after accounting for selection. We find that the bulk of purifying selection against Neanderthal ancestry is best understood as acting on many weakly deleterious alleles. We propose that the majority of these alleles were effectively neutral, and segregating at high frequency, in Neanderthals, but became selected against after entering human populations of much larger effective size. While individually of small effect, these alleles potentially imposed a heavy genetic load on the early-generation human-Neanderthal hybrids. This work suggests that differences in effective population size may play a far more important role in shaping levels of introgression than previously thought.
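The qualitative claim above, that many individually weak selective effects can steadily purge introgressed ancestry, can be illustrated with the standard single-locus haploid selection recursion. This is a generic textbook sketch, not the authors' genome-wide method, and the selection coefficient and generation count below are invented values:

```python
def introgressed_frequency(f0, s, generations):
    # Deterministic haploid selection against an introgressed allele:
    #   f_{t+1} = f_t * (1 - s) / (f_t * (1 - s) + (1 - f_t)),
    # which has the closed form below after t generations (no drift).
    ft = f0 * (1.0 - s) ** generations
    return ft / (ft + (1.0 - f0))

# A weakly deleterious allele (s = 5e-4, an assumed value) starting
# at 10% frequency, roughly 2000 generations after admixture:
f_now = introgressed_frequency(0.10, 5e-4, 2000)
```

Even a tiny per-allele `s` produces an appreciable decline over this many generations, which is the intuition behind selection acting on "many weakly deleterious alleles."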
Annotation of selection strengths in viral genomes
McCauley, Stephen; de Groot, Saskia; Mailund, Thomas
2007-01-01
Motivation: Viral genomes tend to code in overlapping reading frames to maximize information content. This may result in atypical codon bias and particular evolutionary constraints. Due to the fast mutation rate of viruses, there is additional strong evidence for varying selection between intra......- and intergenomic regions. The presence of multiple coding regions complicates the concept of Ka/Ks ratio, and thus begs for an alternative approach when investigating selection strengths. Building on the paper by McCauley & Hein (2006), we develop a method for annotating a viral genome coding in overlapping...... may thus achieve an annotation both of coding regions as well as selection strengths, allowing us to investigate different selection patterns and hypotheses. Results: We illustrate our method by applying it to a multiple alignment of four HIV2 sequences, as well as four Hepatitis B sequences. We...
Virtual estimates of fastening strength for pedicle screw implantation procedures
Linte, Cristian A.; Camp, Jon J.; Augustine, Kurt E.; Huddleston, Paul M.; Robb, Richard A.; Holmes, David R.
2014-03-01
Traditional 2D images provide limited use for accurate planning of spine interventions, mainly due to the complex 3D anatomy of the spine and close proximity of nerve bundles and vascular structures that must be avoided during the procedure. Our previously developed clinician-friendly platform for spine surgery planning takes advantage of 3D pre-operative images, to enable oblique reformatting and 3D rendering of individual or multiple vertebrae, interactive templating, and placement of virtual pedicle implants. Here we extend the capabilities of the planning platform and demonstrate how the virtual templating approach not only assists with the selection of the optimal implant size and trajectory, but can also be augmented to provide surrogate estimates of the fastening strength of the implanted pedicle screws based on implant dimension and bone mineral density of the displaced bone substrate. According to the failure theories, each screw withstands a maximum holding power that is directly proportional to the screw diameter (D), the length of the in-bone segment of the screw (L), and the density (i.e., bone mineral density) of the pedicle body. In this application, voxel intensity is used as a surrogate measure of the bone mineral density (BMD) of the pedicle body segment displaced by the screw. We conducted an initial assessment of the developed platform using retrospective pre- and post-operative clinical 3D CT data from four patients who underwent spine surgery, consisting of a total of 26 pedicle screws implanted in the lumbar spine. The Fastening Strength of the planned implants was directly assessed by estimating the intensity-area product across the pedicle volume displaced by the virtually implanted screw. For post-operative assessment, each vertebra was registered to its homologous counterpart in the pre-operative image using an intensity-based rigid registration followed by manual adjustment. Following registration, the Fastening Strength was computed
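The proportionality described above (holding power proportional to screw diameter D, in-bone length L, and density, with voxel intensity standing in for BMD) can be sketched as follows. The constant `k`, the simple product form, and all numeric values are assumptions for illustration, not the platform's calibrated model:

```python
def fastening_strength(voxel_intensities, voxel_mm, diameter_mm, k=1.0):
    # Surrogate Fastening Strength ~ k * D * L * BMD, where:
    #   D   = screw diameter (mm),
    #   L   = in-bone screw length (number of sampled voxels along
    #         the screw axis times voxel size),
    #   BMD = mean voxel intensity over the displaced pedicle volume
    #         (intensity is the surrogate for bone mineral density).
    if not voxel_intensities:
        raise ValueError("no voxels sampled along the screw track")
    length_mm = len(voxel_intensities) * voxel_mm
    mean_bmd = sum(voxel_intensities) / len(voxel_intensities)
    return k * diameter_mm * length_mm * mean_bmd

# Denser bone or a larger screw should both raise the estimate
# (made-up HU values and screw dimensions):
dense = fastening_strength([300, 320, 310, 305], voxel_mm=0.5, diameter_mm=6.5)
porotic = fastening_strength([120, 110, 130, 115], voxel_mm=0.5, diameter_mm=6.5)
```

The comparison of `dense` against `porotic` captures the clinical intuition: the same screw fastens more strongly in denser bone.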
Biotic interaction strength and the intensity of selection.
Benkman, Craig W
2013-08-01
Although the ecological and evolutionary impacts of species interactions have been the foci of much research, the relationship between the strength of species interactions and the intensity of selection has been investigated only rarely. I develop a simple model demonstrating how the opportunity for selection varies with interaction strength, and then use the relationship between the maximum value of the selection differential and the opportunity for selection (Arnold & Wade 1984) to evaluate how selection differentials vary in relation to species interaction strength. This model predicts an initial deceleration and then an accelerating increase in the intensity of selection with increasing strength of antagonistic interactions and with decreasing strength of mutualistic interactions. Empirical data from several studies provide support for this model. These results further support an evolutionary mechanism for some striking patterns of evolutionary diversification including the latitudinal species gradient, and should be relevant to studies of eco-evolutionary dynamics.
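The central quantities in the model above, the opportunity for selection I (the variance in relative fitness) and the Arnold & Wade bound stating that the standardized selection differential cannot exceed the square root of I, are simple to compute. The fitness values below are invented to illustrate the direction of the effect:

```python
import statistics

def opportunity_for_selection(fitness):
    # I = Var(w) / mean(w)^2, i.e. the variance in relative fitness.
    w_bar = statistics.fmean(fitness)
    return statistics.pvariance(fitness) / w_bar ** 2

def max_standardized_differential(fitness):
    # Arnold & Wade bound: |i| <= sqrt(I) for any trait.
    return opportunity_for_selection(fitness) ** 0.5

# An antagonistic interaction that spreads fitness out raises I,
# and with it the ceiling on how strong selection can be:
weak_interaction = [8, 9, 10, 11, 12]
strong_interaction = [0, 2, 10, 18, 20]
```

Both lists have the same mean fitness, so the difference in I is driven purely by fitness variance, mirroring the paper's link between interaction strength and the intensity of selection.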
Tensile rock mass strength estimated using InSAR
Jonsson, Sigurjon
2012-11-01
The large-scale strength of rock is known to be lower than the strength determined from small-scale samples in the laboratory. However, it is not well known how strength scales with sample size. I estimate kilometer-scale tensional rock mass strength by measuring offsets across new tensional fractures (joints), formed above a shallow magmatic dike intrusion in western Arabia in 2009. I use satellite radar observations to derive 3D ground displacements and by quantifying the extension accommodated by the joints and the maximum extension that did not result in a fracture, I put bounds on the joint initiation threshold of the surface rocks. The results indicate that the kilometer-scale tensile strength of the granitic rock mass is 1–3 MPa, almost an order of magnitude lower than typical laboratory values.
Estimate of Coronal Magnetic Field Strength Using Plasmoid Acceleration Measurement
Choe, G.; Lee, K.; Jang, M.
2010-12-01
A method of estimating the lower bound of coronal magnetic field strength in the neighborhood of an ejecting plasmoid is presented. Based on the assumption that the plasma ejecta is within a magnetic island, an analytical expression for the force acting on the ejecta is derived. A rather simple calculation shows that the vertical force acting on a cylinder-like volume, whose lateral surface is a flux surface and whose magnetic axis is parallel to the horizontal, is just the difference in total pressure (magnetic pressure plus plasma pressure) below and above the volume. The method is applied to a limb coronal mass ejection event, and a lower bound of the magnetic field strength just below the CME core is estimated. The method is expected to provide useful information on the strength of reconnecting magnetic field if applied to X-ray plasma ejecta.
A new algorithm for estimating gillnet selectivity
唐衍力; 黄六一; 葛长字; 梁振林; 孙鹏
2010-01-01
The estimation of gear selectivity is a critical issue in fishery stock assessment and management. Several methods have been developed for estimating gillnet selectivity, but they all have their limitations, such as inappropriate objective function in data fitting, lack of unique estimates due to the difficulty in finding global minima in minimization, biased estimates due to outliers, and estimations of selectivity being influenced by the predetermined selectivity functions. In this study, we develop a new algorit...
In situ estimation of roof rock strength using sonic logging
Oyler, David C.; Mark, Christopher; Molinda, Gregory M. [NIOSH-Pittsburgh Research Laboratory, Pittsburgh, PA (United States)
2010-09-01
Sonic travel time logging of exploration boreholes is routinely used in Australia to obtain estimates of coal mine roof rock strength. Because sonic velocity logs are relatively inexpensive and easy to obtain during exploration, the technique has provided Australian underground coal mines with an abundance of rock strength data for use in all aspects of ground control design. However, the technique depends upon reliable correlations between the uniaxial compressive strength (UCS) and the sonic velocity. This paper describes research recently conducted by NIOSH aimed at developing a correlation for use by the U.S. mining industry. From two coreholes in Illinois, two from Pennsylvania, and one each from Colorado, western Kentucky and southern West Virginia, sonic velocity logs were compared with UCS values derived from Point Load tests for a broad range of coal measure rock types. For the entire data set, the relationship between UCS and sonic travel time is expressed by an exponential equation relating the UCS in psi to the travel time of the P-wave in μs/ft. The coefficient of determination or R-squared for this equation is 0.72, indicating that a relatively high reliability can be achieved with this technique. The strength estimates obtained from the correlation equation may be used to help design roof support systems. The paper also addresses the steps that are necessary to ensure that high-quality sonic logs are obtained for use in estimating UCS. (author)
Strength Estimation of Die Cast Beams Considering Equivalent Porous Defects
Park, Moon Shik [Hannam Univ., Daejeon (Korea, Republic of)
2017-05-15
As a shop practice, a strength estimation method for die cast parts is suggested, in which various defects such as pores can be allowed. The equivalent porosity is evaluated by combining the stiffness data from a simple elastic test at the part level during the shop practice and the theoretical stiffness data, which are defect free. A porosity equation is derived from Eshelby's inclusion theory. Then, using the Mori-Tanaka method, the porosity value is used to draw a stress-strain curve for the porous material. In this paper, the Hollomon equation is used to capture the strain hardening effect. This stress-strain curve can be used to estimate the strength of a die cast part with porous defects. An elastoplastic theoretical solution is derived for the three-point bending of a die cast beam by using the plastic hinge method as a reference solution for a part with porous defects.
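The chain of the estimate, equivalent porosity inverted from the measured-to-theoretical stiffness ratio and then a Hollomon hardening curve for the porous material, can be sketched as below. The Mori-Tanaka-type stiffness relation and the constant `a` are illustrative assumptions (the actual dependence involves Poisson's ratio and the Eshelby tensor), not the paper's derived equation:

```python
def porosity_from_stiffness(e_measured, e_defect_free, a=2.0):
    # Invert an illustrative Mori-Tanaka-type stiffness reduction
    # E/E0 = (1 - p) / (1 + a*p) for the equivalent porosity p.
    # `a` is an assumed placeholder constant.
    r = e_measured / e_defect_free
    return (1.0 - r) / (1.0 + a * r)

def hollomon_stress(strain, k_coeff, n_exp):
    # Hollomon strain-hardening law: sigma = K * epsilon^n.
    return k_coeff * strain ** n_exp

# Example: a part whose measured stiffness sits below the defect-free
# value implies some equivalent porosity (made-up GPa values):
p = porosity_from_stiffness(63.0, 70.0)
```

The estimated `p` would then feed into the porous stress-strain curve used to judge the strength of the die cast part.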
Adaptive link selection algorithms for distributed estimation
Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent
2015-12-01
This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state, and tracking performance and computational complexity. In comparison with the existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence speed can be obtained and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.
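The exhaustive-search flavor of link selection can be illustrated with a toy scalar version: try every subset of neighbor links, combine estimates, and keep the subset minimizing the instantaneous squared error. The paper's LMS/RLS machinery, combiner weights, and network topology are abstracted away here; this sketch only shows why selecting links lets a node route around a failed neighbor:

```python
import itertools

def exhaustive_link_selection(local_est, neighbor_ests, d, x):
    # Evaluate every subset of neighbor links; combine estimates by
    # simple averaging and keep the subset minimizing |d - x * w|^2.
    best_subset, best_w = (), local_est
    best_err = (d - x * local_est) ** 2
    for k in range(1, len(neighbor_ests) + 1):
        for subset in itertools.combinations(range(len(neighbor_ests)), k):
            ests = [local_est] + [neighbor_ests[i] for i in subset]
            w = sum(ests) / len(ests)
            err = (d - x * w) ** 2
            if err < best_err:
                best_subset, best_w, best_err = subset, w, err
    return best_subset, best_w

# One good neighbor (close to the truth w = 2.0) and one failed link
# reporting garbage; the search keeps only the good link:
subset, w = exhaustive_link_selection(1.6, [2.05, -9.0], d=2.0 * 3.0, x=3.0)
```

The poor-quality link (index 1) is excluded because including it degrades the instantaneous error, which is the circumvention-of-link-failures behavior described above.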
Cement paste compressive strength estimation using nondestructive microwave reflectometry
Zoughi, Reza; Gray, S.; Nowak, Paul S.
1994-09-01
Microwave reflection properties of four cement paste samples with various water-cement (w/c) ratios were measured daily for 28 days using microwave frequencies of 5, 9, and 13 GHz. The dielectric properties of these samples, and hence their reflection coefficients, were measured daily and shown to decrease as a function of increasing w/c ratio. This is a direct result of curing (no chemical interaction or hydration). The presence of curing as indicated by this result suggests that microwaves could be used to monitor the amount of curing in a concrete member. The variation in the reflection coefficient of these samples as a function of w/c ratio followed a trend similar to the variation of compressive strength as a function of w/c ratio. Subsequently, a correlation between the measured compressive strength and reflection coefficient of these blocks was obtained. The early results indicated that lower frequencies are more sensitive to compressive strength variations. However, further investigations showed that there may be a frequency around 5 GHz which is the optimum measurement frequency. This result can be used to directly and nondestructively estimate the compressive strength of cement paste and mortar blocks.
EFFECTS OF SELF-SELECTED MUSIC ON MAXIMAL BENCH PRESS STRENGTH AND STRENGTH ENDURANCE.
Bartolomei, Sandro; Di Michele, Rocco; Merni, Franco
2015-06-01
Listening to music during strength workouts has become a very common practice. The goal of this study was to assess the effect of listening to self-selected music on strength performances. Thirty-one resistance-trained men (M age = 24.7 yr., SD = 5.9; M height = 178.7 cm, SD = 4.7; M body mass = 83.54 kg, SD = 12.0) were randomly assigned to either a Music group (n = 19) or to a Control group (n = 12). Both groups took part in two separate sessions; each session consisted of a maximal strength test (1-RM) and a strength-endurance test (repetitions to failure at 60% 1-RM) using the bench press exercise. The Music group listened to music in the second assessment session, while the Control group performed both tests without music. Listening to music induced a significant increase in strength-endurance performance but had no effect on maximal strength. These findings have implications for the use of music during strength workouts.
Optimized tuner selection for engine performance estimation
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
A new estimate of average dipole field strength for the last five million years
Cromwell, G.; Tauxe, L.; Halldorsson, S. A.
2013-12-01
The Earth's ancient magnetic field can be approximated by a geocentric axial dipole (GAD) where the average field intensity is twice as strong at the poles as at the equator. The present day geomagnetic field, and some global paleointensity datasets, support the GAD hypothesis with a virtual axial dipole moment (VADM) of about 80 ZAm2. Significant departures from GAD for 0-5 Ma are found in Antarctica and Iceland where paleointensity experiments on massive flows (Antarctica) (1) and volcanic glasses (Iceland) produce average VADM estimates of 41.4 ZAm2 and 59.5 ZAm2, respectively. These combined intensities are much closer to a lower estimate for long-term dipole field strength, 50 ZAm2 (2), and some other estimates of average VADM based on paleointensities strictly from volcanic glasses. Proposed explanations for the observed non-GAD behavior, from otherwise high-quality paleointensity results, include incomplete temporal sampling, effects from the tangent cylinder, and hemispheric asymmetry. Differences in estimates of average magnetic field strength likely arise from inconsistent selection protocols and experiment methodologies. We address these possible biases and estimate the average dipole field strength for the last five million years by compiling measurement level data of IZZI-modified paleointensity experiments from lava flows around the globe (including new results from Iceland and the HSDP-2 Hawaii drill core). We use the Thellier Gui paleointensity interpreter (3) in order to apply objective criteria to all specimens, ensuring consistency between sites. Specimen level selection criteria are determined from a recent paleointensity investigation of modern Hawaiian lava flows where the expected magnetic field strength was accurately recovered when following certain selection parameters. Our new estimate of average dipole field strength for the last five million years incorporates multiple paleointensity studies on lava flows with diverse global and
Selectivity estimation using compressed spatial information
JEONG Jae-hyuck; CHI Jeong-hee; RYU Keun-ho
2004-01-01
Spatial selectivity estimation is one of the essential techniques for answering queries rapidly and accurately within limited memory space. Several spatial selectivity estimation techniques currently exist, such as random sampling, histograms, and parametric methods. In particular, the Cumulative Density Histogram guarantees accurate estimation for rectangle objects, which suffer from the multiple-count problem. However, it requires large memory space because it retains four sub-histograms for spatial data. Therefore, in this paper, we propose a new technique, the Cumulative Density Wavelet Histogram (CDWH), which combines the Cumulative Density Histogram with the Haar wavelet transform, a compression technique. The proposed method takes full advantage of the strong points of both: the high accuracy provided by the former and the memory savings supported by the latter. Consequently, our technique supports estimates with relatively low error and retains similar accuracy even when memory space is small.
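A one-dimensional sketch of the idea, a cumulative histogram compressed by a Haar wavelet transform that keeps only the largest-magnitude coefficients, looks like this. The real CDWH operates on 2D spatial data with four sub-histograms; the counts below are invented:

```python
def haar(vec):
    # Full Haar decomposition of a length-2^k vector:
    # [overall average, coarsest detail, ..., finest details].
    v, details = list(vec), []
    while len(v) > 1:
        avgs = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
        diffs = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
        details = diffs + details
        v = avgs
    return v + details

def inverse_haar(coeffs):
    # Exact inverse of haar() when no coefficients are dropped.
    v, detail = [coeffs[0]], coeffs[1:]
    while detail:
        d, detail = detail[:len(v)], detail[len(v):]
        v = [x for avg, di in zip(v, d) for x in (avg + di, avg - di)]
    return v

def compress(coeffs, keep):
    # Zero out all but the `keep` largest-magnitude coefficients.
    top = set(sorted(range(len(coeffs)),
                     key=lambda i: abs(coeffs[i]), reverse=True)[:keep])
    return [c if i in top else 0.0 for i, c in enumerate(coeffs)]

def range_count(cum, lo, hi):
    # Cumulative histogram: count of objects in buckets [lo, hi].
    return cum[hi] - (cum[lo - 1] if lo > 0 else 0.0)

counts = [5, 9, 3, 1, 7, 7, 6, 2]
cum = [sum(counts[:i + 1]) for i in range(len(counts))]
approx_cum = inverse_haar(compress(haar(cum), keep=4))  # half the storage
```

With all coefficients the reconstruction is exact; dropping half of them trades a bounded estimation error for a halved memory footprint, which is the accuracy/space trade-off described above.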
Performance Optimization of Self-Piercing Rivets through Analytical Rivet Strength Estimation
Sun, Xin; Khaleel, Mohammad A.
2005-08-01
This paper presents the authors' work on strength optimization and failure mode prediction of self-piercing rivets (SPR) for automotive applications. The limit load-based strength estimator is used to estimate the static strength of an SPR under cross tension loading configuration. Failure modes associated with the estimated failure strength are also predicted. Experimental strength and failure mode observations are used to validate the model. It is shown that the strength of an SPR joint depends on the material and gage combinations, rivet design, die design and riveting direction. The rivet strength estimator is then used to optimize the rivet strength by comparing the measured rivet strength and failure mode with the predicted ones. Two illustrative examples are used in which rivet strength is optimized by changing rivet design and riveting direction from the original manufacturing parameters.
Strength Estimation of Self-Piercing Rivets using Lower Bound Limit Load Analysis
Sun, Xin; Khaleel, Mohammad A.
2005-08-01
This paper summarizes the authors' work on strength and failure mode estimation of self-piercing rivets (SPR) for automotive applications. First, the static cross tension strength of an SPR joint is estimated using a lower bound limit load based strength estimator. Failure mode associated with the predicted failure strength can also be identified. It is shown that the cross tension strength of an SPR joint depends on the material and gage combinations, rivet design, die design and riveting direction. The analytical rivet strength estimator is then validated by experimental rivet strength measurements and failure mode observations from nine SPR joint populations with various material and gage combinations. Next, the estimator is used to optimize rivet strength. Two illustrative examples are presented in which rivet strength is improved by changing rivet length and riveting direction from the original manufacturing parameters.
MODELS TO ESTIMATE BRAZILIAN INDIRECT TENSILE STRENGTH OF LIMESTONE IN SATURATED STATE
Zlatko Briševac
2016-06-01
There are a number of methods of estimating physical and mechanical characteristics. Regression is the most widely used, but recently more sophisticated methods such as neural networks have frequently been applied as well. This paper presents models based on simple and multiple regression and on neural networks (the Radial Basis Function and Multilayer Perceptron types), which can be used to estimate the Brazilian indirect tensile strength in saturated conditions. The paper covers the collection of data for the analysis and modelling, and gives an overview of the analysis performed to assess the efficacy of each model's estimates. After this assessment, the model providing the best estimate was selected, including the model which could have the most widespread application in engineering practice.
A signal strength priority based position estimation for mobile platforms
Kalgikar, Bhargav; Akopian, David; Chen, Philip
2010-01-01
Global Positioning System (GPS) products help to navigate while driving, hiking, boating, and flying. GPS uses a combination of orbiting satellites to determine position coordinates. This works great in most outdoor areas, but the satellite signals are not strong enough to penetrate inside most indoor environments. As a result, a new strain of indoor positioning technologies that make use of 802.11 wireless LANs (WLAN) is beginning to appear on the market. In WLAN positioning the system either monitors propagation delays between wireless access points and wireless device users to apply trilateration techniques or it maintains the database of location-specific signal fingerprints which is used to identify the most likely match of incoming signal data with those preliminary surveyed and saved in the database. In this paper we investigate the issue of deploying WLAN positioning software on mobile platforms with typically limited computational resources. We suggest a novel received signal strength rank order based location estimation system to reduce computational loads with a robust performance. The proposed system performance is compared to conventional approaches.
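The rank-order idea, matching the ordering of access points by signal strength rather than the raw values, is cheap to compute and robust to device-dependent offsets. It can be sketched with a Spearman-style comparison against a fingerprint database (ties are ignored and the fingerprints are invented; the paper's full system is more elaborate):

```python
def ranks(values):
    # Rank APs from strongest (0) to weakest; ties not handled.
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def rank_agreement(a, b):
    # Spearman's rho between two RSS vectors over the same AP list.
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def locate(observed, fingerprints):
    # Return the surveyed location whose stored fingerprint best
    # matches the observed signal-strength ranking.
    return max(fingerprints,
               key=lambda loc: rank_agreement(observed, fingerprints[loc]))

fingerprints = {                     # dBm per AP, made-up survey data
    "lobby":  [-40, -60, -75, -80],
    "office": [-70, -45, -55, -85],
}
```

Because only the ordering matters, an observation like `[-42, -63, -71, -79]` matches the "lobby" fingerprint even though no raw RSS value agrees exactly, which is what makes the approach light enough for resource-limited mobile platforms.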
Santos, Tatiana B; Lana, Milene S; Santos, Allan E M; Silveira, Larissa R C
2017-01-01
Many authors have proposed correlation equations between geomechanical classifications and strength parameters. However, these correlation equations are based on rock masses with characteristics different from those of Brazilian rock masses. This paper aims to study the applicability of geomechanical classifications for obtaining strength parameters of three Brazilian rock masses. Four classification systems have been used: the Rock Mass Rating (RMR), the Rock Mass Quality (Q), the Geological Strength Index (GSI) and the Rock Mass Index (RMi). A strong rock mass and two soft rock masses with different degrees of weathering, located in the cities of Ouro Preto and Mariana, Brazil, were selected for the study. Correlation equations were used to estimate the strength properties of these rock masses. However, such correlations do not always provide results compatible with rock mass behavior. To calibrate the strength values obtained through the classification systems, stability analyses of failures in these rock masses have been performed. After calibration of these parameters, the applicability of the various correlation equations found in the literature is discussed. According to the results presented in this paper, some of these equations are not suitable for the studied rock masses.
Strength and tempo of selection revealed in viral gene genealogies.
Bedford, Trevor; Cobey, Sarah; Pascual, Mercedes
2011-07-25
RNA viruses evolve extremely quickly, allowing them to rapidly adapt to new environmental conditions. Viral pathogens, such as influenza virus, exploit this capacity for evolutionary change to persist within the human population despite substantial immune pressure. Understanding the process of adaptation in these viral systems is essential to our efforts to combat infectious disease. Through analysis of simulated populations and sequence data from influenza A (H3N2) and measles virus, we show how phylogenetic and population genetic techniques can be used to assess the strength and temporal pattern of adaptive evolution. The action of natural selection affects the shape of the genealogical tree connecting members of an evolving population, causing deviations from the neutral expectation. The magnitude and distribution of these deviations lends insight into the historical pattern of evolution and adaptation in the viral population. We quantify the degree of ongoing adaptation in influenza and measles virus through comparison of census population size and effective population size inferred from genealogical patterns, finding a 60-fold greater deviation in influenza than in measles. We also examine the tempo of adaptation in influenza, finding evidence for both continuous and episodic change. Our results have important consequences for understanding the epidemiological and evolutionary dynamics of the influenza virus. Additionally, these general techniques may prove useful to assess the strength and pattern of adaptive evolution in a variety of evolving systems. They are especially powerful when assessing selection in fast-evolving populations, where temporal patterns become highly visible.
Adaptive Covariance Estimation with model selection
Biscay, Rolando; Loubes, Jean-Michel
2012-01-01
We provide in this paper a fully adaptive penalized procedure to select a covariance among a collection of models, given i.i.d. replications of the process observed at fixed observation points. For this we generalize previous results of Bigot et al. and propose to use a data-driven penalty to obtain an oracle inequality for the estimator. We prove that this method is an extension to the matricial regression model of the work by Baraud.
Estimation and variable selection with exponential weights
Arias-Castro, Ery; Lounici, Karim
2014-01-01
In the context of a linear model with a sparse coefficient vector, exponential weights methods have been shown to achieve oracle inequalities for denoising/prediction. We show that such methods also succeed at variable selection and estimation under the near minimum condition on the design matrix, instead of the much stronger assumptions required by other methods such as the Lasso or the Dantzig Selector. The same analysis yields consistency results for Bayesian methods and BIC-type variable s...
Estimation of shear strength parameters of lateritic soils using ...
user
strength of soils varies linearly with the applied stress through two ... and angle of friction were the single output variables in the various ... approximate any complex nonlinear function [36, 37]. Therefore, in this ... Computational approach to ...
Efficiently adapting graphical models for selectivity estimation
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent ... of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing ... cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss ...
Differential strengths of selection on S-RNases from Physalis and Solanum (Solanaceae
Kohn Joshua R
2011-08-01
Background: The S-RNases of the Solanaceae are highly polymorphic self-incompatibility (S-) alleles subject to strong balancing selection. Relatively recent diversification of S-alleles has occurred in the genus Physalis following a historical restriction of S-allele diversity. In contrast, the genus Solanum did not undergo a restriction of S-locus diversity and its S-alleles are generally much older. Because recovery from reduced S-locus diversity should involve increased selection, we employ a statistical framework to ask whether S-locus selection intensities are higher in Physalis than Solanum. Because different S-RNase lineages diversify in Physalis and Solanum, we also ask whether different sites are under selection in different lineages. Results: Maximum-likelihood and Bayesian coalescent methods found higher intensities of selection and more sites under significant positive selection in the 48 Physalis S-RNase alleles than the 49 from Solanum. Highest posterior densities of dN/dS (ω) estimates show that the strength of selection is greater for Physalis at 36 codons. A nested maximum-likelihood method was more conservative, but still found 16 sites with greater selection in Physalis. Neither method found any codons under significantly greater selection in Solanum. A random-effects likelihood method that examines data from both taxa jointly confirmed higher selection intensities in Physalis, but did not find different proportions of sites under selection in the two datasets. The greatest differences in strengths of selection were found in the most variable regions of the S-RNases, as expected if these regions encode self-recognition specificities. Clade-specific likelihood models indicated some codons were under greater selection in background Solanum lineages than in specific lineages of Physalis, implying that selection on sites may differ among lineages. Conclusions: Likelihood and Bayesian methods provide a statistical approach to ...
Estimation of concrete compressive strength using artificial neural network
Kostić, Srđan; Vasović, Dejan
2015-01-01
In the present paper, concrete compressive strength is evaluated using a back-propagation feed-forward artificial neural network. Training of the neural network is performed using the Levenberg-Marquardt learning algorithm for four network architectures, with one, three, eight, and twelve nodes in the hidden layer, in order to avoid the occurrence of overfitting. Training, validation and testing of the neural network are conducted for 75 concrete samples with distinct w/c ratio and amount of superp...
Early-age concrete strength estimation based on piezoelectric sensor using artificial neural network
Kim, Junkyeong; Kim, Ju-Won; Park, Seunghee
2014-04-01
Recently, novel methods to estimate the strength of concrete have been reported based on numerous NDT methods. In particular, electro-mechanical impedance techniques using piezoelectric sensors have been studied to estimate the strength of concrete. However, previous research could not provide general information about the early-age strength that is important for managing the quality of concrete and/or the construction process. In order to estimate the early-age strength of concrete, the electro-mechanical impedance method and an artificial neural network (ANN) are utilized in this study. The electro-mechanical impedance varies with the mechanical properties of host structures. Because strength development is the most influential factor among the changes in mechanical properties at early ages of curing, it is possible to estimate the strength of concrete by analyzing the change of E/M impedance. The strength of concrete is a complex function of several factors such as mix proportion, temperature, elasticity, etc. Because of this, it is hard to mathematically derive equations for the strength of concrete. The ANN can provide a solution for the early-age strength of concrete without mathematical equations. To verify the proposed approach, a series of experimental studies is conducted. The impedance signals are measured using embedded piezoelectric sensors during the curing process, and the resonant frequency of impedance is extracted as a strength feature. The strength of concrete is calculated by regression of the strength development curve obtained by destructive testing. An ANN model is then established by training with the experimental results. Finally, the ANN model is verified using impedance data from other sensors.
Strength of Nonmetallic Materials During Nonuniform Heating (Selected Chapters)
1979-05-09
This translation is a rendition of the original foreign text without any analytical or editorial comment. ... cristobalite. Strength data refer to rods which have been abrasive-finished and then chemically strengthened. The tensile strength of this material at room ... the imprecision of sample preparation and the arrangement of thermocouple junctions and thermoelectrodes between the samples ...
Estimating seabed scattering mechanisms via Bayesian model selection.
Steininger, Gavin; Dosso, Stan E; Holland, Charles W; Dettmer, Jan
2014-10-01
A quantitative inversion procedure is developed and applied to determine the dominant scattering mechanism (surface roughness and/or volume scattering) from seabed scattering-strength data. The classification system is based on trans-dimensional Bayesian inversion with the deviance information criterion used to select the dominant scattering mechanism. Scattering is modeled using first-order perturbation theory as due to one of three mechanisms: Interface scattering from a rough seafloor, volume scattering from a heterogeneous sediment layer, or mixed scattering combining both interface and volume scattering. The classification system is applied to six simulated test cases where it correctly identifies the true dominant scattering mechanism as having greater support from the data in five cases; the remaining case is indecisive. The approach is also applied to measured backscatter-strength data where volume scattering is determined as the dominant scattering mechanism. Comparison of inversion results with core data indicates the method yields both a reasonable volume heterogeneity size distribution and a good estimate of the sub-bottom depths at which scatterers occur.
Vertebral Strength and Estimated Fracture Risk Across the BMI Spectrum in Women.
Bachmann, Katherine N; Bruno, Alexander G; Bredella, Miriam A; Schorr, Melanie; Lawson, Elizabeth A; Gill, Corey M; Singhal, Vibha; Meenaghan, Erinne; Gerweck, Anu V; Eddy, Kamryn T; Ebrahimi, Seda; Koman, Stuart L; Greenblatt, James M; Keane, Robert J; Weigel, Thomas; Dechant, Esther; Misra, Madhusmita; Klibanski, Anne; Bouxsein, Mary L; Miller, Karen K
2016-02-01
Somewhat paradoxically, fracture risk, which depends on applied loads and bone strength, is elevated in both anorexia nervosa and obesity at certain skeletal sites. Factor-of-risk (Φ), the ratio of applied load to bone strength, is a biomechanically based method to estimate fracture risk; theoretically, higher Φ reflects increased fracture risk. We estimated vertebral strength (linear combination of integral volumetric bone mineral density [Int.vBMD] and cross-sectional area from quantitative computed tomography [QCT]), vertebral compressive loads, and Φ at L4 in 176 women (65 anorexia nervosa, 45 lean controls, and 66 obese). Using biomechanical models, applied loads were estimated for: 1) standing; 2) arms flexed 90°, holding 5 kg in each hand (holding); 3) 45° trunk flexion, 5 kg in each hand (lifting); 4) 20° trunk right lateral bend, 10 kg in right hand (bending). We also investigated associations of Int.vBMD and vertebral strength with lean mass (from dual-energy X-ray absorptiometry [DXA]) and visceral adipose tissue (VAT, from QCT). Women with anorexia nervosa had lower, whereas obese women had similar, Int.vBMD and estimated vertebral strength compared with controls. Vertebral loads were highest in obesity and lowest in anorexia nervosa for standing, holding, and lifting (p < ...). Int.vBMD and estimated vertebral strength were associated positively with lean mass (R = 0.28 to 0.45, p ≤ 0.0001) in all groups combined and negatively with VAT (R = -[0.36 to 0.38], p < ...). Women with anorexia nervosa had higher estimated vertebral fracture risk (Φ) for holding and bending because of inferior vertebral strength. Despite similar vertebral strength as controls, obese women had higher vertebral fracture risk for standing, holding, and lifting because of higher applied loads from higher body weight. Examining the load-to-strength ratio helps explain increased fracture risk in both low-weight and obese women.
Collagen orientation and leather strength for selected mammals.
Sizeland, Katie H; Basil-Jones, Melissa M; Edmonds, Richard L; Cooper, Sue M; Kirby, Nigel; Hawley, Adrian; Haverkamp, Richard G
2013-01-30
Collagen is the main structural component of leather, skin, and some other applications such as medical scaffolds. All of these materials have a mechanical function, so the manner in which collagen provides them with their strength is of fundamental importance and was investigated here. This study shows that the tear strength of leather across seven species of mammals depends on the degree to which collagen fibrils are aligned in the plane of the tissue. Tear-resistant material has the fibrils contained within parallel planes with little crossover between the top and bottom surfaces. The fibril orientation is observed using small-angle X-ray scattering in leather, produced from skin, with tear strengths (normalized for thickness) of 20-110 N/mm. The orientation index, 0.420-0.633, is linearly related to tear strength such that greater alignment within the plane of the tissue results in stronger material. The statistical confidence and diversity of animals suggest that this is a fundamental determinant of strength in tissue. This insight is valuable in understanding the performance of leather and skin in biological and industrial applications.
Concepts for estimating the fatigue strength of sintered steel components
Götz Sebastian
2014-06-01
The fatigue notch effect can be estimated using fracture-mechanics-based support factors. Stress intensity factors for cracks in notches must therefore be calculated. There is a problem of transferability when 2D reference geometries are used for this. This can be avoided by modelling 3D cracks in the notch of the actual part to be assessed. This is more laborious, but leads to better results, as will be shown in this presentation. The questions of the aspect ratio and of the point on the crack front where stresses are evaluated will be discussed. The approach is validated using a broad database of different sintered steels. The statistical evaluation shows an almost exact prediction of the mean value with a relatively small scatter.
Alexandra-Cristina Paunescu
2014-01-01
This study was conducted to identify determinants of bone strength estimated by quantitative ultrasonography (QUS) at the calcaneus of Greenlandic Inuit women. A total of 153 Inuit women from Nuuk, aged from 49 to 64 years, participated in the first QUS measurement (year 2000) with an Achilles Lunar instrument (speed of sound (SOS), broadband ultrasound attenuation (BUA), stiffness index (SI)). A second measurement was performed two years later (year 2002) in 121 participants. Several factors known to be associated with bone strength were recorded at baseline for 118 of them. Determinants of QUS parameters were identified using an automatic (stepwise) selection of variables in linear regression. Significant determinants of baseline QUS measurements were age and body weight for all QUS parameters, height for BUA and SI, and hormone replacement therapy (HRT) use for SI. Significant predictors of follow-up QUS measurements were baseline QUS values, smoking status and HRT use for all QUS parameters, the omega-3/omega-6 PUFA content ratio of erythrocyte membrane phospholipids (BUA and SI), and menopausal status (BUA). Several modifiable dietary factors, such as a diet rich in omega-3 PUFAs, and lifestyle factors (i.e., smoking, taking HRT) were shown to determine QUS parameters after a follow-up of two years.
Estimating the variation, autocorrelation, and environmental sensitivity of phenotypic selection
Chevin, Luis-Miguel; Visser, Marcel E.; Tufto, Jarle
2015-01-01
Despite considerable interest in temporal and spatial variation of phenotypic selection, very few methods allow quantifying this variation while correctly accounting for the error variance of each individual estimate. Furthermore, the available methods do not estimate the autocorrelation of phenotyp
VSRR - Quarterly provisional estimates for selected birth indicators
U.S. Department of Health & Human Services — Provisional estimates of selected reproductive indicators. Estimates are presented for: general fertility rates, age-specific birth rates, total and low risk...
Estimation of a multivariate mean under model selection uncertainty
Georges Nguefack-Tsague
2014-05-01
Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper, model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James-Stein theory of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
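The James-Stein result the abstract builds on can be sketched in a few lines: for three or more normal means, shrinking the raw observations toward zero never increases total squared error. The positive-part rule and the synthetic data below are a minimal illustration, not the paper's model-averaging estimator.

```python
import numpy as np

def james_stein(y, sigma2=1.0):
    """Positive-part James-Stein shrinkage estimate of a multivariate
    normal mean from a single observation vector y (requires p >= 3)."""
    p = len(y)
    assert p >= 3, "James-Stein shrinkage requires p >= 3 coordinates"
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / np.dot(y, y))  # positive-part rule
    return shrink * y

rng = np.random.default_rng(0)
theta = np.zeros(10)                 # true mean (a favourable case for shrinkage)
y = theta + rng.normal(size=10)      # one noisy observation per coordinate
js = james_stein(y)
# Shrinkage reduces the total squared error relative to the raw estimate here:
print(np.sum((y - theta) ** 2), np.sum((js - theta) ** 2))
```

With the true mean at zero, the shrunken estimate is strictly closer; the gain shrinks as the true mean moves away from the shrinkage target.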
J. Gogoi
2012-01-01
This paper deals with the stress vs. strength problem incorporating multi-component systems, viz. standby redundancy. The models developed have been illustrated assuming that all the components in the system, for both stress and strength, are independent and follow different probability distributions, viz. Exponential, Gamma and Lindley. Four different conditions for stress and strength have been considered for this investigation. Under these assumptions the reliabilities of the system have been obtained with the help of the particular forms of density functions of the n-standby system when all stress-strengths are random variables. The expressions for the marginal reliabilities R(1), R(2), R(3), etc. have been derived based on the stress-strength models. The corresponding system reliabilities Rn have then been computed numerically and presented in tabular form for different stress-strength distributions with different values of their parameters. Here we consider n = 3 for estimating the system reliability R3.
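The single-component building block of such models is R = P(strength > stress). For the exponential case named in the abstract this has a simple closed form, sketched and Monte Carlo-checked below; the rates are made-up values, and the standby extension is omitted.

```python
import numpy as np

# Closed-form stress-strength reliability when both are exponential:
# strength X ~ Exp(rate lx), stress Y ~ Exp(rate ly)  =>  P(X > Y) = ly / (lx + ly).
def reliability_exponential(lx, ly):
    return ly / (lx + ly)

# Monte Carlo check of the closed form (illustrative rates, not from the paper).
rng = np.random.default_rng(1)
lx, ly = 0.5, 1.5
x = rng.exponential(1 / lx, size=200_000)   # simulated strengths
y = rng.exponential(1 / ly, size=200_000)   # simulated stresses
print(reliability_exponential(lx, ly), np.mean(x > y))  # both near 0.75
```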
Bayesian feature selection to estimate customer survival
Figini, Silvia; Giudici, Paolo; Brooks, S P
2006-01-01
We consider the problem of estimating the lifetime value of customers, when a large number of features are present in the data. In order to measure lifetime value we use survival analysis models to estimate customer tenure. In such a context, a number of classical modelling challenges arise. We will show how our proposed Bayesian methods perform, and compare it with classical churn models on a real case study. More specifically, based on data from a media service company, our aim will be to p...
Estimating the strength of the nucleus material of comet 67P Churyumov-Gerasimenko
Basilevsky, A. T.; Krasil'nikov, S. S.; Shiryaev, A. A.; Mall, U.; Keller, H. U.; Skorov, Yu. V.; Mottola, S.; Hviid, S. F.
2016-07-01
Consideration is given to the estimates for the strength of the consolidated material forming the bulk of the nucleus of comet 67P Churyumov-Gerasimenko and those for the strength of the surface material overlying the consolidated material at the sites of the first and last contact of the Philae lander with the nucleus. The strength of the consolidated material was estimated by analyzing the terrain characteristics of the steep cliffs, where the material is exposed on the surface. Based on these estimates, the tensile strength of the material is in the range from 1.5 to 100 Pa; the shear strength, from ˜13 to ⩾30 Pa; and the compressive strength, from 30 to 150 Pa, possibly up to 1.5 kPa. These are very low strength values. Given the dependence of the measurement results on the size of the measured object, they are similar to those of fresh dry snow at -10°C. The (compressive) strength of the surface material at the site of the first touchdown of Philae on the nucleus is estimated from the measurements of the dynamics of the surface impact by the spacecraft's legs and the geometry of the impact pits as 1-3 kPa. For comparison with the measurement results for ice-containing materials in terrestrial laboratories, it needs to be taken into account that the rate of deformation by Philae's legs is four orders of magnitude higher than that in typical terrestrial measurements, leading to a possible overestimation of the strength by roughly an order of magnitude. There was an attempt to put one of the MUPUS sensors into the surface material at the site of the last contact of Philae with the nucleus. Noticeable penetration of the tester probe was not achieved, which led to an estimate of the minimum compressive strength of the material of ⩾4 MPa. This fairly high strength appears to indicate the presence of highly porous ice with grains "frozen" at contacts.
Fatigue Strength Estimation Based on Local Mechanical Properties for Aluminum Alloy FSW Joints
Kittima Sillapasa
2017-02-01
Overall fatigue strengths and hardness distributions of similar and dissimilar aluminum alloy friction stir welding (FSW) joints were determined. The local fatigue strengths as well as local tensile strengths were also obtained by using small round-bar specimens extracted from specific locations, such as the stir zone, heat-affected zone, and base metal. It was found from the results that fatigue fracture of the FSW joint plate specimen occurred at the location of the lowest local fatigue strength as well as the lowest hardness, regardless of microstructural evolution. To estimate the fatigue strengths of aluminum alloy FSW joints from hardness measurements, the relationship between fatigue strength and hardness for aluminum alloys was investigated based on the present experimental results and the available wide range of data from the references. It was found to be: σa (R = −1) = 1.68 HV (σa is in MPa and HV has no unit). It was also confirmed that the estimated fatigue strengths were in good agreement with the experimental results for aluminum alloy FSW joints.
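The empirical relation quoted in the abstract is a one-liner; the sketch below applies it to an assumed hardness value (the 60 HV input is an illustration, not a measurement from the study).

```python
def estimated_fatigue_strength(hv):
    """Fatigue strength (MPa, fully reversed loading, R = -1) from Vickers
    hardness via the empirical relation sigma_a = 1.68 * HV reported in the
    abstract above for aluminum alloy FSW joints."""
    return 1.68 * hv

# e.g. an assumed stir-zone hardness of 60 HV gives roughly 100 MPa:
print(round(estimated_fatigue_strength(60), 1))  # -> 100.8
```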
Heritability estimates of muscle strength-related phenotypes: A systematic review and meta-analysis.
Zempo, H; Miyamoto-Mikami, E; Kikuchi, N; Fuku, N; Miyachi, M; Murakami, H
2016-11-23
The purpose of this study was to clarify the heritability estimates of human muscle strength-related phenotypes (H²-msp). A systematic literature search was conducted using PubMed (through August 22, 2016). Studies reporting the H²-msp for healthy subjects in a sedentary state were included. Random-effects models were used to calculate the weighted mean heritability estimates. Moreover, subgroup analyses were performed based on phenotypic categories (eg, grip strength, isotonic strength, jumping ability). Sensitivity analyses were also conducted to investigate potential sources of heterogeneity of H²-msp, which included age and sex. Twenty-four articles including 58 measurements were included in the meta-analysis. The weighted mean H²-msp for all 58 measurements was 0.52 (95% confidence intervals [CI]: 0.48-0.56), with high heterogeneity (I² = 91.0%, P < ...). The weighted mean H²-msp for grip strength, other isometric strength, isotonic strength, isokinetic strength, jumping ability, and other power measurements was 0.56 (95% CI: 0.46-0.67), 0.49 (0.47-0.52), 0.49 (0.32-0.67), 0.49 (0.37-0.61), 0.55 (0.45-0.65), and 0.51 (0.31-0.70), respectively. The H²-msp decreased with age (P < ...); the H²-msp across muscle strength-related phenotypes is comparable. Moreover, the role of environmental factors increased with age. These findings may contribute toward an understanding of muscle strength-related phenotypes.
Bone Density, Turnover, and Estimated Strength in Postmenopausal Women Treated With Odanacatib
Brixen, Kim; Chapurlat, Roland; Cheung, Angela M;
2013-01-01
... bone compartments and estimated strength at the hip and spine. Design: This was a randomized, double-blind, 2-year trial. Setting: The study was conducted at a private or institutional practice. Participants: Participants included 214 postmenopausal women with low areal BMD. Intervention: The intervention included odanacatib 50 mg or placebo weekly. Main Outcome Measures: Changes in areal BMD by dual-energy x-ray absorptiometry (primary end point, 1-year areal BMD change at lumbar spine), bone turnover markers, volumetric BMD by quantitative computed tomography (QCT), and bone strength estimated by finite element analysis. ... The bone-formation marker procollagen I N-terminal peptide initially decreased with odanacatib but by 2 years did not differ from placebo. After 6 months, odanacatib-treated women had greater increases in trabecular volumetric BMD and estimated compressive strength at the spine and integral and trabecular volumetric BMD ...
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
The twitch interpolation technique for the estimation of true quadriceps muscle strength.
Nørregaard, J; Lykkegaard, J J; Bülow, P M; Danneskiold-Samsøe, B
1997-09-01
The aim of this study was to examine the reliability of the twitch interpolation technique when used to estimate the true isometric knee extensor muscle strength. This included an examination of whether submaximal activation causes any bias in the estimation of the true muscle strength and an examination of the precision of the method. Twenty healthy subjects completed three contraction series, in which the subjects were told to perform as if their voluntary strength was 60%, 80% or 100% of that determined by a maximal voluntary contraction (MVC). Electrical muscle stimulations were given at each of five different contraction levels in each series. At torque levels above 25% of MVC the relationship between torque and twitch size could be approximated to be linear. The true muscle strength (TMS) could therefore be estimated using linear regression of the twitch-torque relationship to the torque point of no twitch in each of the three series, termed TMS60, TMS80 and TMS100. The TMS80 was slightly lower (P < ...). ... Subjects with an estimated central activation of below 40-50% were excluded. The only moderate precision, and the slightly lower estimates in subjects applying submaximal force, do, however, limit its usefulness.
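The estimation step described above is a straight-line extrapolation: regress superimposed twitch size on voluntary torque and find the torque at which the twitch reaches zero. The sketch below uses made-up, perfectly linear numbers to show the mechanics; it is not the study's data.

```python
import numpy as np

# Twitch interpolation: the superimposed twitch declines roughly linearly with
# voluntary torque above ~25% MVC; extrapolating the regression line to zero
# twitch gives the estimated true muscle strength (TMS).
torque = np.array([30.0, 45.0, 60.0, 75.0, 90.0])   # voluntary torque (% of MVC)
twitch = np.array([14.0, 11.0, 8.0, 5.0, 2.0])      # superimposed twitch (Nm, synthetic)

slope, intercept = np.polyfit(torque, twitch, 1)    # least-squares line
tms = -intercept / slope    # torque at which the twitch extrapolates to zero
print(round(tms, 2))        # -> 100.0 for this perfectly linear synthetic data
```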
The estimation of compressive strength of normal and recycled aggregate concrete
Janković Ksenija
2011-01-01
Estimation of concrete strength is an important issue in the ready-mixed concrete industry, especially in proportioning new mixtures and for the quality assurance of the concrete produced. In this article, the existing experimental data on the compressive strength of normal and recycled aggregate concrete are compared with the equation for calculating compressive strength given in the Technical Regulation. The accuracies of prediction from the experimental data obtained in the laboratory as well as by EN 1992-1-1, ACI 209 and SRPS U.M1.048 are compared on the basis of the coefficient of determination. The determination of the compressive strengths by the equation described here relies on the type of cement and the age of the concrete at a constant curing temperature.
Ali Abd Elhakam Aliabdo
2012-09-01
This study aims to investigate the relationships between Schmidt hardness rebound number (RN) and ultrasonic pulse velocity (UPV) versus compressive strength (fc) of stones and bricks. Four types of rock (marble, pink limestone, white limestone and basalt) and two types of brick (burned bricks and lime-sand bricks) were studied. Linear and non-linear models were proposed. High correlations were found between RN and UPV versus compressive strength. Validation of the proposed models was assessed using other specimens of each material. Linear models for each material showed better correlations than non-linear models. A general model between RN and compressive strength of the tested stones and bricks showed a high correlation, with a regression coefficient R2 value of 0.94. Estimation of the compressive strength of the studied stones and bricks using their rebound number and ultrasonic pulse velocity in a combined method was generally more reliable than using the rebound number or ultrasonic pulse velocity alone.
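The combined method amounts to fitting fc as a function of both RN and UPV and checking the coefficient of determination. A minimal linear sketch is below; the five data points are placeholders invented for illustration, not measurements from the study.

```python
import numpy as np

# Combined estimate: fit fc as a linear function of both rebound number (RN)
# and ultrasonic pulse velocity (UPV). All numbers are synthetic placeholders.
rn  = np.array([30.0, 34.0, 38.0, 42.0, 46.0])    # rebound numbers
upv = np.array([3.6, 3.9, 4.1, 4.4, 4.7])         # pulse velocities (km/s)
fc  = np.array([22.0, 27.0, 31.0, 36.0, 41.0])    # compressive strengths (MPa)

X = np.column_stack([np.ones_like(rn), rn, upv])  # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, fc, rcond=None)     # least-squares fit
fc_hat = X @ coef
r2 = 1 - np.sum((fc - fc_hat) ** 2) / np.sum((fc - fc.mean()) ** 2)
print(coef, r2)   # R^2 close to 1 for this nearly linear synthetic data
```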
Nakano, Takemi; Nagata, Kentaro; Yamada, Masafumi; Magatani, Kazushige
2009-01-01
In this study, we describe the application of the least-squares method for muscular strength estimation in hand motion recognition based on surface electromyogram (SEMG). Although various evaluation methods for muscular strength can be considered, grasp force is applied here as the index of muscular strength. Today, SEMG, which is measured from the skin surface, is widely used as a control signal for many devices, because SEMG is one of the most important biological signals in which the human motion intention is directly reflected. Various devices using SEMG have been reported by many researchers. For devices that use SEMG as a control signal, which we call SEMG systems, achieving high-accuracy recognition is an important requirement, and conventional SEMG systems have mainly focused on this objective. Although it is also important to estimate the muscular strength of motions, most of these systems cannot detect the power of a muscle. The ability to estimate muscular strength is a very important factor in controlling SEMG systems. Thus, the objective of this study is to develop an estimation method for muscular strength by application of the least-squares method, and to reflect the measured power in the controlled object. Since it is known that SEMG is formed by physiological variations in the state of muscle fiber membranes, it is thought that it can be related to grasp force. We applied the least-squares method to construct a relationship between SEMG and grasp force. In order to construct an effective evaluation model, four SEMG measurement locations, chosen in consideration of individual differences, were determined by the Monte Carlo method.
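The least-squares calibration from four-channel SEMG amplitudes to grasp force can be sketched as below. The channel weights, feature values, and noise level are all synthetic assumptions for illustration; the paper's actual features and electrode placement are not reproduced here.

```python
import numpy as np

# Least-squares mapping from SEMG amplitudes at four electrode sites to grasp
# force, in the spirit of the abstract above. All data are synthetic.
rng = np.random.default_rng(42)
true_w = np.array([2.0, 1.5, 0.5, 1.0])          # assumed channel weights (N per unit SEMG)
semg = rng.uniform(0.1, 1.0, size=(50, 4))       # 50 calibration trials x 4 channels
force = semg @ true_w + rng.normal(0, 0.05, 50)  # measured grasp force (N), with noise

w_hat, *_ = np.linalg.lstsq(semg, force, rcond=None)   # least-squares calibration
predicted = semg @ w_hat
rmse = np.sqrt(np.mean((predicted - force) ** 2))
print(w_hat, rmse)   # weights recovered close to true_w; RMSE near the noise level
```

Once calibrated, `semg_sample @ w_hat` gives a force estimate for a new trial.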
Equations for estimating selected streamflow statistics in Rhode Island
Bent, Gardner C.; Steeves, Peter A.; Waite, Andrew M.
2014-01-01
Regional regression equations were developed for estimating selected natural—unaffected by alteration—streamflows of specific flow durations and low-flow frequency statistics for ungaged stream sites in Rhode Island. Selected at-site streamflow statistics are provided for 41 long-term streamgages, 21 short-term streamgages, and 135 partial-record stations in Rhode Island, eastern Connecticut, and southeastern and south-central Massachusetts. The regression equations for estimating selected streamflow statistics and the at-site statistics estimated for each of the 197 sites may be used by Federal, State, and local water managers in addressing water issues in and near Rhode Island.
Variable selection and estimation for longitudinal survey data
Wang, Li
2014-09-01
There is wide interest in studying longitudinal surveys where sample subjects are observed successively over time. Longitudinal surveys have been used in many areas today, for example, in the health and social sciences, to explore relationships or to identify significant variables in regression settings. This paper develops a general strategy for the model selection problem in longitudinal sample surveys. A survey weighted penalized estimating equation approach is proposed to select significant variables and estimate the coefficients simultaneously. The proposed estimators are design consistent and perform as well as the oracle procedure when the correct submodel was known. The estimating function bootstrap is applied to obtain the standard errors of the estimated parameters with good accuracy. A fast and efficient variable selection algorithm is developed to identify significant variables for complex longitudinal survey data. Simulated examples are illustrated to show the usefulness of the proposed methodology under various model settings and sampling designs. © 2014 Elsevier Inc.
Distributive estimation of frequency selective channels for massive MIMO systems
Zaib, Alam
2015-12-28
We consider frequency-selective channel estimation in the uplink of massive MIMO-OFDM systems, where our major concern is complexity. A low-complexity distributed LMMSE algorithm is proposed that attains near-optimal channel impulse response (CIR) estimates from noisy observations at the receive antenna array. In the proposed method, every antenna estimates the CIRs of its neighborhood, followed by recursive sharing of estimates with immediate neighbors. At each step, every antenna calculates the weighted average of the shared estimates, which converges to the near-optimal LMMSE solution. The simulation results validate the near-optimal performance of the proposed algorithm in terms of mean square error (MSE). © 2015 EURASIP.
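The recursive neighbor-averaging idea in the abstract above can be illustrated with a toy consensus sketch. This is not the authors' LMMSE algorithm: the ring topology, the values, and the plain unweighted averaging are all illustrative assumptions.

```python
import numpy as np

def consensus_average(estimates, neighbors, n_iter=50):
    """Each node repeatedly replaces its value with the average of itself and
    its neighbors; on a connected symmetric graph this converges to a
    network-wide average."""
    x = np.asarray(estimates, float)
    for _ in range(n_iter):
        x = np.array([np.mean(x[[i] + neighbors[i]]) for i in range(len(x))])
    return x

# Four antennas in a ring, each holding a noisy local estimate of the same
# channel tap (illustrative numbers).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
local = [1.2, 0.8, 1.1, 0.9]
shared = consensus_average(local, neighbors)
print(shared)   # every entry close to the global mean 1.0
```

Because the averaging matrix here is doubly stochastic, repeated local sharing recovers the global average without any antenna ever seeing all observations, which is the appeal of distributed estimation.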
Statistical methods for cosmological parameter selection and estimation
Liddle, Andrew R
2009-01-01
The estimation of cosmological parameters from precision observables is an important industry with crucial ramifications for particle physics. This article discusses the statistical methods presently used in cosmological data analysis, highlighting the main assumptions and uncertainties. The topics covered are parameter estimation, model selection, multi-model inference, and experimental design, all primarily from a Bayesian perspective.
Sanzo Miyazawa
BACKGROUND: Empirical substitution matrices represent the average tendencies of substitutions over various protein families by sacrificing gene-level resolution. We develop a codon-based model in which the mutational tendencies of codons, the genetic code, and the strength of selective constraints against amino acid replacements can be tailored to a given gene. First, selective constraints averaged over proteins are estimated by maximizing the likelihood of each 1-PAM matrix of empirical amino acid (JTT, WAG, and LG) and codon (KHG) substitution matrices. Then, selective constraints specific to given proteins are approximated as a linear function of those estimated from the empirical substitution matrices. RESULTS: Akaike information criterion (AIC) values indicate that a model allowing multiple nucleotide changes fits the empirical substitution matrices significantly better. Also, the ML estimates of transition-transversion bias obtained from these empirical matrices are not as large as previously estimated. The selective constraints are characteristic of proteins rather than species. However, their relative strengths among amino acid pairs can be approximated as depending largely on the amino acid pair rather than on the protein family, because the present model, in which selective constraints are approximated as a linear function of those estimated from the JTT/WAG/LG/KHG matrices, provides a good fit to other empirical substitution matrices, including cpREV for chloroplast proteins and mtREV for vertebrate mitochondrial proteins. CONCLUSIONS/SIGNIFICANCE: The present codon-based model, with ML estimates of selective constraints and adjustable nucleotide mutation rates, would be useful as a simple substitution model in ML and Bayesian inferences of molecular phylogenetic trees, and enables us to obtain biologically meaningful information at both the nucleotide and amino acid levels from codon and protein sequences.
A conditional likelihood is required to estimate the selection coefficient in ancient DNA
Valleriani, Angelo
2016-01-01
Time-series of allele frequencies are a useful and unique type of data for determining the strength of natural selection against the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. For ancient DNA in particular, however, often only a single such trajectory is available, and the coverage of the fitness landscape is very limited. In fact, a single trajectory is more representative of a process conditioned on both its initial and final states than of a process free to end anywhere. Based on the Moran model of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient even when allele frequencies are close to fixation ...
S. Koley
2017-01-01
The purpose of this study was two-fold: first, to estimate the back strength of Indian inter-university male field hockey players and, second, to examine its correlations with selected anthropometric variables and performance tests. To this end, a total of nine anthropometric variables (height, weight, body mass index, percent body fat, knee height, length of femur, femur biepicondylar diameter, skeletal mass, and back strength) and two performance tests (the sit-and-reach test and the slalom sprint and dribble test) were measured on 120 purposively selected Indian inter-university male hockey players aged 18–25 years at the inter-university competition held in Guru Nanak Dev University, Amritsar, India during March 2014. An adequate number of controls (n=119) were also taken from the same place for comparison. The results showed that the hockey players had higher mean values in all the variables, except percent body fat and the slalom sprint and dribble test, than their control counterparts, with statistically significant differences (p ≤ 0.003 – 0.001) between them. No significant correlations of back strength were found with any of the variables in Indian inter-university male field hockey players. In conclusion, it may be stated that back strength may not be usable as an indicating factor for the performance of field hockey players.
Hakan Ersoy; Melek Betül Karsli; Seda Çellek; Bilgehan Kul; Idris Baykan; Robert L Parsons
2013-12-01
Costly and time-consuming testing techniques, and the difficulty of obtaining undisturbed samples for these tests, have led researchers to estimate the strength parameters of soils from simple index tests. This paper therefore focuses on estimating strength parameters of soils as a function of index properties. A methodology based on the analytic hierarchy process and multiple regression analysis was applied to datasets obtained from soil tests on 41 samples in Tertiary volcanic regolith. While the hierarchy model focused on determining the most important index properties affecting the strength parameters, the regression analysis established meaningful relationships between strength parameters and index properties. Negative polynomial correlations between the friction angle and plasticity properties, and positive exponential relations between the cohesion and plasticity properties, were determined; these relations are characterized by a regression coefficient of 0.80. Terzaghi bearing capacity formulas were then used to test the model, since for model testing it is important to see whether there is a statistically significant relation between the calculated and the observed bearing capacity values. Based on the model, a positive linear correlation, characterized by a regression coefficient of 0.86, was determined between bearing capacity values obtained by the direct and indirect methods.
Dubois, D.P.; Yereniuk, V.A. [Manitoba Hydro, Winnipeg, MB (Canada)
2003-07-01
Some instabilities have been observed at several dyke locations at the Seven Sisters Generating Station, Manitoba since construction in the late 1940s. The foundations of the dykes are fissured plastic clays. Since the late 1970s, a number of researchers have proposed methods for estimating strength parameters of fissured plastic clays for use in slope stability analyses. This paper reports on four methods which were used for estimating Mohr-Coulomb strength parameters for stability analyses involving nine dyke locations where instability has been reported in the past. Correlation is established between the calculated safety factors and observed performance in an effort to determine the most appropriate method for this site. It was determined that the most appropriate method was that proposed by P.J. Rivard and Y. Lu in the late 1970s. 16 refs., 1 tab., 7 figs.
Assessment of Residual Strength Based on Estimated Temperature of Post-Heated RC Columns
Muhammad Yaqub
2013-01-01
Experience shows that fire-damaged concrete structures can often be reinstated, both technically and economically, because of concrete's high fire resistance and high residual strength. The residual strength of a fire-damaged concrete structural member depends on the peak temperature reached during the fire, the fire duration, and the distribution of temperature within the member. A sound assessment of the residual strength of post-heated concrete structural members is a prime factor in deciding between reinstatement and demolition of a fire-damaged structure. This paper provides an easy and efficient approach to predict the residual strength of reinforced concrete columns based on the estimated temperature which may have occurred within the concrete cross-section during a fire. A finite element model was developed to evaluate the distribution of temperature within the cross-section of the reinforced concrete columns. Twelve reinforced concrete square columns were heated experimentally up to 500°C at 150°C/hour. A comparison of the experimental temperature values of the tested columns was made with the model results, and a good agreement was found between the experimental and the finite element results. Based on the temperature distribution obtained from the finite element model, the residual strength of concrete and reinforcement could be evaluated by using the relationships for concrete, steel, and temperature proposed by various researchers.
Bahman Tarvirdizade
2014-01-01
We consider the estimation of stress-strength reliability based on lower record values when X and Y are independent but not identically distributed inverse Rayleigh random variables. The maximum likelihood, Bayes, and empirical Bayes estimators of R are obtained and their properties are studied. Confidence intervals, exact and approximate, as well as Bayesian credible sets for R are obtained. A real example is presented to illustrate the inferences discussed. A simulation study is conducted to investigate and compare the performance of the intervals presented in this paper and some bootstrap intervals.
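For the inverse Rayleigh family, the stress-strength reliability R = P(Y < X) has the closed form lam_x/(lam_x + lam_y), so a plug-in maximum likelihood sketch is easy to write down. The parameter values are invented for illustration, and the paper's record-value, Bayes, and interval procedures are not reproduced here; this shows only the point estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def rinv_rayleigh(lam, n, rng):
    # If U ~ Exponential(rate lam), then X = 1/sqrt(U) has CDF exp(-lam/x^2),
    # i.e. X is inverse Rayleigh with parameter lam.
    return 1.0 / np.sqrt(rng.exponential(1.0 / lam, n))

lam_x, lam_y = 2.0, 1.0          # assumed parameters: strength X, stress Y
x = rinv_rayleigh(lam_x, 5000, rng)
y = rinv_rayleigh(lam_y, 5000, rng)

# MLE of the inverse Rayleigh parameter: lam_hat = n / sum(1/x_i^2).
lx = len(x) / np.sum(1.0 / x**2)
ly = len(y) / np.sum(1.0 / y**2)

# Plug-in MLE of R = P(Y < X) = lam_x / (lam_x + lam_y) for this family.
R_hat = lx / (lx + ly)
print(round(R_hat, 2))   # close to the true value 2/3
```

The closed form follows from integrating the CDF of Y against the density of X with the substitution u = 1/x^2.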
BLIND CHANNEL ESTIMATION IN DELAY DIVERSITY FOR FREQUENCY SELECTIVE CHANNELS
Zhao Zheng; Jia Ying; Yin Qinye
2003-01-01
Delay diversity is an effective transmit diversity technique to combat adverse effects of fading. Thus far, previous work in delay diversity assumed that perfect estimates of current channel fading conditions are available at the receiver and that training symbols are required to estimate the channel from the transmitter to the receiver. However, increasing the number of antennas increases the required training interval and reduces the available time within which data may be transmitted. Learning the channel coefficients becomes increasingly difficult for frequency selective channels. In this paper, with the subspace method and the delay character of delay diversity, a channel estimation method is proposed which does not use training symbols. It addresses transmit diversity for a frequency selective channel from a single carrier perspective in the form of a simple equivalent flat fading model. Monte Carlo simulations give the performance of channel estimation and the performance comparison of our channel-estimation-based detector with decision feedback equalization, which uses the perfect channel information.
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
A Copula-Based Method for Estimating Shear Strength Parameters of Rock Mass
Da Huang
2014-01-01
The shear strength parameters (i.e., the internal friction coefficient f and cohesion c) are very important in rock engineering, especially for the stability analysis and reinforcement design of slopes and underground caverns. In this paper, a probabilistic, Copula-based method is proposed for estimating the shear strength parameters of rock mass. The optimal Copula functions between rock mass quality Q and f, and between Q and c, for marbles are established based on correlation analyses of the results of 12 sets of in situ tests in the exploration adits of the Jinping I-Stage Hydropower Station. Although the Copula functions are derived from in situ tests on marbles, they can be extended to other types of rock mass with similar geological and mechanical properties. For another 9 sets of in situ tests, used as an extended application, the estimated values of f and c from the Copula-based method achieve better accuracy than the results from the Hoek-Brown criterion. Therefore, the proposed Copula-based method is an effective tool for estimating rock strength parameters.
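A Gaussian copula with assumed marginals can illustrate how a copula links rock-mass quality Q to the friction coefficient f: the copula carries the dependence, the marginals carry the scales. The correlation value and both marginals below are invented for illustration; the paper fits its own optimal copulas to in situ data.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
rho = 0.8         # copula correlation, an assumed value, not from the paper
n = 100_000

# Gaussian copula sample: correlated standard normals mapped to uniforms
# through the standard normal CDF Phi.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))

# Assumed illustrative marginals: Q lognormal, friction coefficient f uniform.
Q = np.exp(0.5 * z[:, 0])
f = 0.5 + u[:, 1]

# Conditional estimate of f given an observed rock-mass quality Q near 1.
band = np.abs(Q - 1.0) < 0.05
f_est = np.median(f[band])
print(round(f_est, 2))   # near 1.0, the conditional median at Q = 1
```

Swapping in empirically fitted marginals and a different copula family changes only the sampling step, which is what makes the approach flexible.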
The Optimal Selection for Restricted Linear Models with Average Estimator
Qichang Xie
2014-01-01
The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do so, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has efficiency comparable to some alternative model selection techniques.
Challenges in estimating insecticide selection pressures from mosquito field data.
Susana Barbosa
2011-11-01
Insecticide resistance has the potential to compromise the enormous effort put into the control of dengue and malaria vector populations. It is therefore important to quantify the amount of selection acting on resistance alleles, their contributions to fitness in heterozygotes (dominance), and their initial frequencies, as a means to predict the rate of spread of resistance in natural populations. We investigate practical problems of obtaining such estimates, with particular emphasis on Mexican populations of the dengue vector Aedes aegypti. Selection and dominance coefficients can be estimated by fitting genetic models to field data using maximum likelihood (ML) methodology. This methodology, although widely used, makes many assumptions, so we investigated how well such models perform when data are sparse or when spatial and temporal heterogeneity occur. As expected, ML methodologies reliably estimated selection and dominance coefficients under idealised conditions, but it was difficult to recover the true values when datasets were sparse during the time that resistance alleles increased in frequency, or when spatial and temporal heterogeneity occurred. We analysed published data on pyrethroid resistance in Mexico consisting of the frequency of the Ile1016 mutation. The estimates of the selection coefficient and initial allele frequency for the field dataset were in the expected range, and the dominance coefficient points to incomplete dominance, as observed in the laboratory, although these estimates come with strong caveats about the possible impact of spatial and temporal heterogeneity in selection.
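Under genic selection the logit of the resistance-allele frequency grows roughly linearly in time, with slope proportional to the selection coefficient, so a crude estimate is the slope of a logit-linear fit to the frequency time series. This is only the deterministic approximation, not the full ML machinery the abstract describes; the trajectory below is simulated and noise-free.

```python
import numpy as np

def fit_selection(freqs, gens):
    """Slope of logit allele frequency over generations, a rough proxy for
    the selection coefficient under the deterministic genic-selection model."""
    f = np.asarray(freqs, float)
    logit = np.log(f / (1.0 - f))
    slope, _ = np.polyfit(gens, logit, 1)
    return slope

gens = np.arange(8)
s_true, p0 = 0.5, 0.05
logit0 = np.log(p0 / (1.0 - p0))
p = 1.0 / (1.0 + np.exp(-(logit0 + s_true * gens)))   # noise-free trajectory
print(fit_selection(p, gens))   # recovers 0.5 exactly on noise-free data
```

With real field data, drift and sampling noise enter the picture, which is exactly why the paper resorts to likelihood fitting rather than this simple regression.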
LIU Lin; FAN Ping-zhi
2008-01-01
The performance of a cellular location system based on received signal strength difference (RSSD) is investigated. In the cellular location system, each mobile station measures the signal strength transmitted by surrounding base stations and sends its measurements to the serving base station. Using the strength difference between the serving base station and neighboring base stations, the position of a mobile station is estimated. The Cramer-Rao lower bound (CRLB) on the location error of this method was derived, and numerical simulations were performed to discuss the influences of the number of base stations, the correlation coefficient of shadowing attenuation, and the cell radius on the CRLB. The results show that the CRLB is positively correlated with the standard deviation of shadowing attenuation and the cell radius, but negatively correlated with the number of base stations and the correlation coefficient of shadowing attenuation. In addition, the CRLB results obtained in this paper were compared with those of a cellular location system based on received signal strength (RSS) measurements, which reveals that the former bound is tighter.
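The RSSD positioning idea can be sketched with a log-distance path-loss model and a brute-force grid search: the mobile's position is the point whose predicted RSS differences best match the measured ones. The station layout, path-loss exponent, and noise-free measurements are all illustrative assumptions, and a real estimator would use shadowing statistics rather than a grid.

```python
import numpy as np

# Assumed layout: four base stations (metres); index 0 is the serving station.
bs = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])
n_pl = 3.5   # assumed path-loss exponent

def rssd(pos):
    d = np.maximum(np.linalg.norm(bs - pos, axis=1), 1e-9)
    # RSS difference (dB) between the serving BS and each neighboring BS
    # under the log-distance path-loss model.
    return 10.0 * n_pl * np.log10(d[1:] / d[0])

true_pos = np.array([300.0, 400.0])
measured = rssd(true_pos)            # noise-free measurements for illustration

# Brute-force estimate: minimise the squared RSSD mismatch over a grid.
xs = np.arange(0.0, 1001.0, 10.0)
grid = np.array([[x, y] for x in xs for y in xs])
errs = [np.sum((rssd(p) - measured) ** 2) for p in grid]
est = grid[int(np.argmin(errs))]
print(est)   # recovers [300. 400.] in the noise-free case
```

Adding correlated lognormal shadowing to `measured` would spread the error surface and is what the CRLB in the abstract quantifies.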
Lunar soil strength estimation based on Chang'E-3 images
Gao, Yang; Spiteri, Conrad; Li, Chun-Lai; Zheng, Yong-Chun
2016-11-01
Chang'E-3 (CE-3) was the third mission by China to explore the Moon, landing two spacecraft, the CE-3 lander and the Yutu rover, on the lunar surface in late 2013. This paper presents analytical results from high-resolution terrain data taken by CE-3's onboard cameras. The image data processing aims to extract sinkage profiles of the wheel tracks during the rover traverse. Further analysis leads to derivation or estimation of lunar soil physical properties (in terms of strength and stiffness) based on the wheel sinkage, despite the fact that Yutu does not carry in situ soil measurement instruments. Our findings indicate that the lunar soil at the CE-3 landing site has similar stiffness to that measured at the Luna 17 landing site but much less strength compared to the Apollo 15 landing site.
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
High-dimensional model estimation and model selection
CERN. Geneva
2015-01-01
I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
Software Effort Estimation with Ridge Regression and Evolutionary Attribute Selection
Papatheocharous, Efi; Andreou, Andreas S
2010-01-01
Software cost estimation is one of the prerequisite managerial activities carried out at the software development initiation stages and repeated throughout the whole software life-cycle so that amendments to the total cost can be made. Typically, in software cost estimation, a selection of project attributes is employed to produce effort estimates of the expected human resources needed to deliver a software product. However, choosing the appropriate project cost drivers in each case requires a lot of experience and knowledge on behalf of the project manager, which can only be obtained through years of software engineering practice. A number of studies indicate that popular methods applied in the literature for software cost estimation, such as linear regression, are not robust enough and do not yield accurate predictions. Recently the dual-variables Ridge Regression (RR) technique has been used for effort estimation, yielding promising results. In this work we show that results may be further improved if an AI meth...
Erasing errors due to alignment ambiguity when estimating positive selection.
Redelings, Benjamin
2014-08-01
Current estimates of diversifying positive selection rely on first having an accurate multiple sequence alignment. Simulation studies have shown that under biologically plausible conditions, relying on a single estimate of the alignment from commonly used alignment software can lead to unacceptably high false-positive rates in detecting diversifying positive selection. We present a novel statistical method that eliminates excess false positives resulting from alignment error by jointly estimating the degree of positive selection and the alignment under an evolutionary model. Our model treats both substitutions and insertions/deletions as sequence changes on a tree and allows site heterogeneity in the substitution process. We conduct inference starting from unaligned sequence data by integrating over all alignments. This approach naturally accounts for ambiguous alignments without requiring ambiguously aligned sites to be identified and removed prior to analysis. We take a Bayesian approach and conduct inference using Markov chain Monte Carlo to integrate over all alignments on a fixed evolutionary tree topology. We introduce a Bayesian version of the branch-site test and assess the evidence for positive selection using Bayes factors. We compare two models of differing dimensionality using a simple alternative to reversible-jump methods. We also describe a more accurate method of estimating the Bayes factor using Rao-Blackwellization. We then show using simulated data that jointly estimating the alignment and the presence of positive selection solves the problem with excessive false positives from erroneous alignments and has nearly the same power to detect positive selection as when the true alignment is known. We also show that samples taken from the posterior alignment distribution using the software BAli-Phy have substantially lower alignment error compared with MUSCLE, MAFFT, PRANK, and FSA alignments.
Shyamal Koley
2012-01-01
The purpose of the present study was to estimate the dominant handgrip strength and its correlations with selected anthropometric and physiological characteristics in inter-university volleyball players. Three anthropometric characteristics, four body composition parameters, two physical and two physiological characteristics were measured on 63 randomly selected inter-university volleyball players (38 males and 25 females, aged 18.25 years) from six Indian universities; the competition was held in Guru Nanak Dev University, Amritsar, Punjab, India. An adequate number of controls (n = 102; 52 males and 50 females) were also taken from the same place for comparisons. One-way analysis of variance showed significant (p ≤ .004 - .000) differences in all the variables between volleyball players and controls. In volleyball players, significantly positive correlations were found between right and left handgrip strength and all the variables studied except percent body fat (where the correlations were significantly negative). It may be concluded that dominant handgrip strength had strong positive correlations with all the variables studied in inter-university volleyball players.
Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2011-01-01
...'s optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all...
Estimation and Direct Equalization of Doubly Selective Channels
Leus Geert
2006-01-01
We propose channel estimation and direct equalization techniques for transmission over doubly selective channels. The doubly selective channel is approximated using the basis expansion model (BEM). Linear and decision feedback equalizers implemented by time-varying finite impulse response (FIR) filters may then be used to equalize the doubly selective channel, where the time-varying FIR filters are designed according to the BEM. In this sense, the equalizer BEM coefficients are obtained either based on channel estimation or directly. The proposed channel estimation and direct equalization techniques range from pilot-symbol-assisted modulation (PSAM) based techniques to blind and semi-blind techniques. In PSAM techniques, pilot symbols are utilized to estimate the channel or to directly obtain the equalizer coefficients. The training overhead can be completely eliminated by using blind techniques, or reduced by combining training-based techniques with blind techniques, resulting in semi-blind techniques. Numerical experiments are conducted to verify the different proposed channel estimation and direct equalization techniques.
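The BEM approximation amounts to least-squares fitting a handful of complex-exponential basis functions to each time-varying channel tap over a block. A minimal sketch, with the block length, basis order, and synthetic tap chosen for illustration (the tap's Doppler tones lie exactly on the basis grid, so the fit is exact; real channels leave a small residual):

```python
import numpy as np

N, Q = 200, 4                      # block length and BEM order (assumed)
t = np.arange(N)

# Synthetic slowly varying channel tap: two Doppler tones on the basis grid.
h = np.exp(2j * np.pi * 1 * t / N) + 0.5 * np.exp(-2j * np.pi * 2 * t / N)

# BEM: Q+1 complex-exponential basis functions spanning the block.
q = np.arange(-Q // 2, Q // 2 + 1)
B = np.exp(2j * np.pi * np.outer(t, q) / N)

# Least-squares BEM coefficients and the normalized modelling error.
c, *_ = np.linalg.lstsq(B, h, rcond=None)
nmse = np.linalg.norm(h - B @ c) ** 2 / np.linalg.norm(h) ** 2
print(nmse)   # essentially zero: 5 coefficients describe 200 samples
```

The payoff is dimensionality reduction: estimating Q+1 coefficients per tap instead of N samples is what makes training-based and blind estimation of doubly selective channels tractable.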
Selective transmission and channel estimation in massive MIMO systems
杨睿哲
2016-01-01
Massive MIMO systems achieve extraordinary spectral efficiency using a large number of base station antennas, but they face the challenge of pilot contamination when aligned pilots are used. To address this issue, a selective transmission scheme is proposed using time-shifted pilots with cell grouping, where the strong interfering users in downlink transmission cells are temporarily stopped during pilot transmission in uplink cells. Based on the spatial characteristics of physical channel models, the strong interfering users are selected to minimize the inter-cell interference, and the cell grouping is designed so that fewer users are temporarily stopped within a smaller area. Furthermore, a Kalman estimator is proposed to reduce the unexpected effect of residual interference in channel estimation, which exploits both the spatial-time correlation of channels and the sharing of interference information. The numerical results show that our scheme significantly improves the channel estimation accuracy and the data rates.
Autoregressive model selection with simultaneous sparse coefficient estimation
Sang, Hailin
2011-01-01
In this paper we propose a sparse coefficient estimation procedure for autoregressive (AR) models based on penalized conditional maximum likelihood. The penalized conditional maximum likelihood estimator (PCMLE) thus developed has the advantage of performing simultaneous coefficient estimation and model selection. Mild conditions are given on the penalty function and the innovation process, under which the PCMLE satisfies strong consistency, local $N^{-1/2}$ consistency, and an oracle property, respectively, where N is the sample size. Two penalty functions, the least absolute shrinkage and selection operator (LASSO) and smoothly clipped absolute deviation (SCAD), are considered as examples, and SCAD is shown to have better performance than LASSO. A simulation study confirms our theoretical results. At the end, we provide an application of our method to historical price data of the US Industrial Production Index for consumer goods, and the result is very promising.
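The simultaneous estimation-and-selection idea can be shown by applying an L1 penalty to an AR design matrix. The solver below is plain iterative soft-thresholding (ISTA), an assumption for illustration rather than the paper's PCMLE algorithm, run on a simulated AR(5) series in which only lags 1 and 3 are active.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimise 0.5*||y - Xb||^2 + lam*||b||_1 by iterative soft-thresholding."""
    b = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        g = X.T @ (X @ b - y)
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return b

# Simulate an AR(5) series where only lags 1 and 3 matter.
rng = np.random.default_rng(2)
n = 2000
true = np.array([0.5, 0.0, 0.3, 0.0, 0.0])
x = np.zeros(n)
for t in range(5, n):
    past = x[t - 5:t][::-1]                # [x[t-1], ..., x[t-5]]
    x[t] = true @ past + 0.1 * rng.standard_normal()

# Lagged design matrix: column k holds lag k of the series.
X = np.column_stack([x[5 - k:n - k] for k in range(1, 6)])
y = x[5:]
b = lasso_ista(X, y, lam=5.0)
print(np.round(b, 2))   # large weights on lags 1 and 3; the rest near zero
```

The L1 penalty biases the active coefficients toward zero, which is exactly the shortcoming that the SCAD penalty discussed in the abstract is designed to reduce.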
The effect of selective oxidation of chromium on the creep strength of alloy 617
Ennis, P.; Quadakkers, W.; H. Schuster
1993-01-01
In order to investigate the effect on creep strength of the selective oxidation of chromium, which causes the formation of a carbide-free subsurface zone, specimens of Ni22Cr12Co9Mo1Al (Alloy 617) were subjected to heat treatments to simulate a long-term service exposure of a thin-walled heat exchanger tube operating at high temperatures. In creep tests carried out at 900°C, specimens with extensive chromium-depleted and carbide-free subsurface zones exhibited higher creep strength than speci...
Tuning target selection algorithms to improve galaxy redshift estimates
Hoyle, Ben; Rau, Markus Michael; Seitz, Stella; Weller, Jochen
2015-01-01
We showcase machine learning (ML) inspired target selection algorithms to determine which of all potential targets should be selected first for spectroscopic follow-up. Efficient target selection can improve the ML redshift uncertainties as calculated on an independent sample, while requiring fewer targets to be observed. We compare the ML targeting algorithms with the Sloan Digital Sky Survey (SDSS) target order and with a random targeting algorithm. The ML inspired algorithms are constructed iteratively by estimating which of the remaining target galaxies will be most difficult for the machine learning methods to accurately estimate redshifts for, using the previously observed data. This is performed by predicting the expected redshift error and redshift offset (or bias) of all of the remaining target galaxies. We find that the predicted values of bias and error are accurate to better than 10-30% of the true values, even with only limited training sample sizes. We construct a hypothetical follow-up survey and fi...
The Diversity Potential of Relay Selection with Practical Channel Estimation
Michalopoulos, Diomidis S; Schober, Robert; Karagiannidis, George K
2011-01-01
We investigate the diversity order of decode-and-forward relay selection in Nakagami-m fading, in cases where practical channel estimation techniques are applied. In this respect, we introduce a unified model for the imperfect channel estimates, where the effects of noise, time-varying channels, and feedback delays are jointly considered. Based on this model, the correlation between the actual and the estimated channel values, ρ, is expressed as a function of the signal-to-noise ratio (SNR), yielding closed-form expressions for the overall outage probability as a function of ρ. The resulting diversity order and power gain reveal a high dependence of the performance of relay selection on the high-SNR behavior of ρ, thus shedding light on the effect of channel estimation on the overall performance. It is shown that when the channel estimates are not frequently updated in applications involving time-varying channels, or when the amount of power allocated for channel estimation is not sufficiently high...
Adamczak Stanisław
2014-08-01
The aim of this study was to estimate the measurement uncertainty for a material produced by additive manufacturing. The material investigated was FullCure 720 photocured resin, which was used to fabricate tensile specimens with a Connex 350 3D printer based on PolyJet technology. The tensile strength of the specimens, established through static tensile testing, was used to determine the measurement uncertainty. There is a need for extensive research into the performance of model materials obtained via 3D printing, as they have not been studied as thoroughly as metal alloys or plastics, the most common structural materials. In this analysis, the measurement uncertainty was estimated using a larger number of samples than usual: thirty instead of the typical ten. The results can be very useful to engineers who design models and finished products using this material. The investigations also show how wide the scatter of results is.
Bias in the Weibull Strength Estimation of a SiC Fiber for the Small Gauge Length Case
Morimoto, Tetsuya; Nakagawa, Satoshi; Ogihara, Shinji
It is known that the single-modal Weibull model describes well the size effect of brittle fiber tensile strength. However, for some ceramic fibers the single-modal Weibull model has been reported to give biased estimates of the gauge-length dependence. One hypothesis for the bias is that the density of critical defects is very small; the fracture probability of short-gauge-length samples is therefore distributed in a discrete manner, which makes the Weibull parameters depend on the gauge length. Tyranno ZMI Si-Zr-C-O fiber was selected as an example, and tensile tests were performed at several gauge lengths. The derived Weibull parameters showed a dependence on the gauge length. Fracture surfaces were observed by SEM and classified into characteristic fracture patterns; the percentage of each pattern was also found to depend on the gauge length. This may be an important factor in the dependence of the Weibull parameters on the gauge length.
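The size effect discussed above follows from weakest-link statistics, P_f(σ; L) = 1 − exp[−(L/L0)(σ/σ0)^m]. A minimal sketch of the standard estimation procedure (linear regression on the linearized CDF with median-rank plotting positions) and of the gauge-length scaling of the mean strength; all numbers are synthetic, not the paper's Tyranno ZMI data:

```python
import numpy as np
from math import gamma

def weibull_fit(strengths):
    """Estimate Weibull modulus m and scale s0 by linear regression on the
    linearized CDF: ln(-ln(1 - F)) = m*ln(sigma) - m*ln(s0)."""
    s = np.sort(np.asarray(strengths, dtype=float))
    n = s.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank plotting positions
    slope, intercept = np.polyfit(np.log(s), np.log(-np.log(1.0 - F)), 1)
    return slope, np.exp(-intercept / slope)       # (m, s0)

def mean_strength_at(m, s0, L0, L):
    """Weakest-link scaling of the mean strength from gauge length L0 to L."""
    return s0 * (L0 / L) ** (1.0 / m) * gamma(1.0 + 1.0 / m)

# Synthetic fiber strengths drawn from a Weibull with m = 5, s0 = 3.0 GPa
rng = np.random.default_rng(0)
m_hat, s0_hat = weibull_fit(3.0 * rng.weibull(5.0, size=200))
```

Longer gauge lengths give lower mean strength under this scaling; the paper's point is precisely that this single-modal scaling can break down when critical defects are rare.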
Tuning target selection algorithms to improve galaxy redshift estimates
Hoyle, Ben; Paech, Kerstin; Rau, Markus Michael; Seitz, Stella; Weller, Jochen
2016-06-01
We showcase machine learning (ML) inspired target selection algorithms to determine which of all potential targets should be selected first for spectroscopic follow-up. Efficient target selection can improve the ML redshift uncertainties as calculated on an independent sample, while requiring fewer targets to be observed. We compare seven different ML targeting algorithms with the Sloan Digital Sky Survey (SDSS) target order, and with a random targeting algorithm. The ML inspired algorithms are constructed iteratively by estimating which of the remaining target galaxies will be most difficult for the ML methods to accurately estimate redshifts using the previously observed data. This is performed by predicting the expected redshift error and redshift offset (or bias) of all of the remaining target galaxies. We find that the predicted values of bias and error are accurate to better than 10-30 per cent of the true values, even with only limited training sample sizes. We construct a hypothetical follow-up survey and find that some of the ML targeting algorithms are able to obtain the same redshift predictive power with 2-3 times less observing time, as compared to that of the SDSS, or random, target selection algorithms. The reduction in the required follow-up resources could allow for a change to the follow-up strategy, for example by obtaining deeper spectroscopy, which could improve ML redshift estimates for deeper test data.
Photo-z Estimation: An Example of Nonparametric Conditional Density Estimation under Selection Bias
Izbicki, Rafael; Freeman, Peter E
2016-01-01
Redshift is a key quantity for inferring cosmological model parameters. In photometric redshift estimation, cosmologists use the coarse data collected from the vast majority of galaxies to predict the redshift of individual galaxies. To properly quantify the uncertainty in the predictions, however, one needs to go beyond standard regression and instead estimate the full conditional density f(z|x) of a galaxy's redshift z given its photometric covariates x. The problem is further complicated by selection bias: usually only the rarest and brightest galaxies have known redshifts, and these galaxies have characteristics and measured covariates that do not necessarily match those of more numerous and dimmer galaxies of unknown redshift. Unfortunately, there is not much research on how to best estimate complex multivariate densities in such settings. Here we describe a general framework for properly constructing and assessing nonparametric conditional density estimators under selection bias, and for combining two o...
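One standard way to handle the covariate shift this abstract describes is to weight each labeled galaxy by the ratio of target to training covariate densities inside a kernel conditional density estimator. A toy numpy sketch (the densities are known analytically here; in practice the ratio must itself be estimated, and this is not the paper's specific estimator):

```python
import numpy as np

def gauss(u):
    """Standard normal kernel."""
    return np.exp(-0.5 * u * u) / np.sqrt(2.0 * np.pi)

def weighted_cde(z_grid, x0, x_tr, z_tr, w, hx=0.1, hz=0.1):
    """Importance-weighted kernel estimate of f(z | x = x0); the weights
    w_i ~ f_target(x_i) / f_train(x_i) correct the covariate shift of the
    selection-biased training set."""
    kx = w * gauss((x_tr - x0) / hx)               # weighted kernel in covariate space
    fz = np.array([np.sum(kx * gauss((z - z_tr) / hz)) for z in z_grid])
    return fz / (hz * np.sum(kx))

rng = np.random.default_rng(1)
x_tr = rng.beta(2, 5, size=2000)              # training covariates biased toward small x
z_tr = x_tr + 0.05 * rng.normal(size=2000)    # toy "redshift" tied to the covariate
w = 1.0 / (30.0 * x_tr * (1.0 - x_tr) ** 4)   # uniform target pdf / Beta(2,5) training pdf
w = np.minimum(w, 50.0)                       # clip extreme weights to control variance
zg = np.linspace(-0.5, 1.5, 401)
fz = weighted_cde(zg, 0.7, x_tr, z_tr, w)     # estimated f(z | x = 0.7)
```

The weight clipping is the usual practical compromise: unclipped importance weights can be dominated by a few rare training points from the under-sampled region.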
Impact of Selected Parameters on the Fatigue Strength of Splices on Multiply Textile Conveyor Belts
Bajda, Mirosław; Błażej, Ryszard; Hardygóra, Monika
2016-10-01
Splices are the weakest points in the conveyor belt loop. The strength of these joints, and thus their design as well as the method and quality of splicing, determine the strength of the whole conveyor belt loop. A special zone in a splice exists, where the stresses in the adjacent plies or cables differ considerably from each other. This results in differences in the elongation of these elements and in additional shearing stresses in the rubber layer. The strength of the joints depends on several factors, among others on the parameters of the joined belt, on the connecting layer and the technology of joining, as well as on the materials used to make the joint. The strength of the joint constitutes a criterion for the selection of a belt suitable for the operating conditions, and therefore methods of testing such joints are of great importance. This paper presents the method of testing fatigue strength of splices made on multi-ply textile conveyor belts and the results of these studies.
The Strengths of r- and K-Selection Shape Diversity-Disturbance Relationships
Bohn, Kristin; Pavlick, Ryan; Reu, Björn; Kleidon, Axel
2014-01-01
Disturbance is a key factor shaping species abundance and diversity in plant communities. Here, we use a mechanistic model of vegetation diversity to show that different strengths of r- and K-selection result in different disturbance-diversity relationships. r- and K-selection constrain the range of viable species through the colonization-competition tradeoff, with strong r-selection favoring colonizers and strong K-selection favoring competitors, but the level of disturbance also affects the success of species. This interplay among r- and K-selection and disturbance results in different shapes of disturbance-diversity relationships: little variation of diversity with neither r- nor K-selection, a decrease in diversity with disturbance rate under r-selection, an increase under K-selection, and a peak at intermediate disturbance rates under strong r- and K-selection. We conclude that different disturbance-diversity relationships found in observations may reflect different intensities of r- and K-selection within communities, which should be inferable from broader observations of community composition and their ecophysiological trait ranges. PMID:24763335
Black Hole Mass Estimates of Radio Selected Quasars
Oshlack, Alicia; Webster, Rachel; Whiting, Matthew
2002-01-01
The black hole (BH) mass in the centre of AGN has been estimated for a sample of radio-selected flat-spectrum quasars to investigate the relationship between BH mass and radio properties of quasars. We have used the virial assumption with measurements of the Hβ FWHM and luminosity to estimate the central BH mass. In contrast to previous studies we find no correlation between BH mass and radio power in these AGN. We find a range in BH mass similar to that seen in radio-quiet quasars from...
Effects of Reduced Strength on Self-Selected Pacing for Long-Duration Activities
Buxton, Roxanne E.; Ryder, Jeffrey W.; English, Kirk E.; Guined, Jamie R.; Ploutz-Snyder, Lori L.
2015-01-01
Strength and aerobic capacity are predictors of astronaut performance for extravehicular activities (EVA) during exploration missions. It is expected that astronauts will self-select a pace below their ventilatory threshold (VT). PURPOSE: To determine the percentage of VT that subjects self-select for prolonged occupational tasks. METHODS: Maximal aerobic capacity and a variety of lower-body strength and power variables were assessed in 17 subjects who climbed 480 rungs on a ladder ergometer and then completed 10 km on a treadmill as quickly as possible using a self-selected pace. The tasks were performed on 4 days, with a weighted suit providing 0% (suit fabric only), 40%, 60%, and 80% of additional bodyweight (BW), thereby altering the strength-to-BW ratio. Oxygen consumption and heart rate were continuously measured. Repeated measures ANOVA and post hoc comparisons were performed on the percent of VT values under each suited condition. RESULTS: Subjects consistently self-paced at or below VT for both tasks, and the pace was related to suit weight. At the midpoint of the ladder climb, the 80% BW condition elicited the lowest metabolic cost (-19±14% below VT), significantly different from the 0% BW (-3±16%, P=0.002) and 40% BW conditions (-5±22%, P=0.023). The 60% BW condition (-13±19%) differed from the 40% BW condition (P=0.034). Upon completion of the ladder task there were no differences among the conditions (0% BW: 3±18%; 40% BW: 3±21%; 60% BW: -8±25%; 80% BW: -10±18%). All subjects failed to complete 5 km at 80% BW. At the midpoint of the treadmill test the three remaining conditions were all significantly different (0% BW: -20±15%; 40% BW: -33±15%; 60% BW: -41±19%). Upon completion of the treadmill test the 60% BW condition (-38±12%) was significantly different from the 40% BW condition (-28±15%, P=0.024). CONCLUSIONS: Decreasing relative strength results in progressive and disproportionate decreases (relative to VT) in self-selected pacing.
The importance of estimating selection bias on prevalence estimates, shortly after a disaster.
Grievink, L.; Velden, P.G. van der; Yzermans, C.J.; Roorda, J.; Stellato, R.K.
2006-01-01
PURPOSE: The aim was to study selective participation and its effect on prevalence estimates in a health survey of affected residents 3 weeks after a man-made disaster in The Netherlands (May 13, 2000). METHODS: All affected adult residents were invited to participate. Survey (questionnaire) data we
Sair Kahraman; Michael Alber
2014-10-01
Fault breccias are usually not suitable for preparing smooth specimens, or else the preparation of such specimens is tedious, time consuming and expensive. To develop a predictive model for the uniaxial compressive strength (UCS) of a fault breccia from electrical resistivity values obtained by electrical impedance spectroscopy, twenty-four samples of a fault breccia were tested in the laboratory. The UCS values were correlated with the corresponding resistivity values, but no strong correlation was found between them. However, a strong correlation was found for the samples having a volumetric block proportion (VBP) of 25–75%. In addition, VBP was found to correlate strongly with resistivity. It was concluded that the UCS of the tested breccia can be estimated from resistivity for samples having a VBP of 25–75%.
Anonymous
2006-01-01
A method to estimate the probability density function (PDF) of shear strength parameters was proposed. The second Chebyshev orthogonal polynomial (SCOP) combined with sample moments (the origin moments) was used to approximate the PDF of the parameters. A χ2 test was adopted to verify the applicability of the method. The method is distribution-free because no classical theoretical distributions are assumed in advance, and the inference result provides a universal form of probability density curves. Six commonly used theoretical distributions, namely the normal, lognormal, extreme value I, gamma, beta and Weibull distributions, were used to verify the SCOP method. An example using observed data of the cohesion c of a silty clay is presented for illustrative purposes. The results show that the acceptance levels in SCOP are all smaller than those in the classical finite comparative method and that the SCOP function is more accurate and effective in the reliability analysis of geotechnical engineering.
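The general idea behind SCOP — approximating a density by an orthogonal-polynomial series whose coefficients are sample averages — can be illustrated as follows. This is a generic Chebyshev-series sketch under my own normalization convention, not the paper's exact SCOP construction:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def scop_density(sample, order=6):
    """Orthogonal-series density estimate. Map data to t in [-1, 1] and expand
    f(t) = (1/sqrt(1 - t^2)) * sum_k a_k T_k(t); Chebyshev orthogonality gives
    a_0 = 1/pi and a_k = (2/pi) * E[T_k(t)], with the expectations replaced by
    sample means (i.e. by moments of the data)."""
    lo, hi = float(sample.min()), float(sample.max())
    t = np.clip(2.0 * (sample - lo) / (hi - lo) - 1.0, -0.999, 0.999)
    a = np.zeros(order + 1)
    a[0] = 1.0 / np.pi
    for k in range(1, order + 1):
        ck = np.zeros(k + 1)
        ck[k] = 1.0                                  # coefficient vector for T_k
        a[k] = (2.0 / np.pi) * np.mean(C.chebval(t, ck))
    def f(x):
        tx = np.clip(2.0 * (np.asarray(x, float) - lo) / (hi - lo) - 1.0, -0.999, 0.999)
        # Jacobian 2/(hi - lo) maps the density back to the original scale
        return C.chebval(tx, a) / np.sqrt(1.0 - tx ** 2) * 2.0 / (hi - lo)
    return f

rng = np.random.default_rng(2)
f_hat = scop_density(rng.uniform(0.0, 1.0, size=5000))   # true density is 1 on [0, 1]
```

Like the method in the abstract, nothing here assumes a parametric family; the truncated series can, however, dip slightly negative near the support edges.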
Grohs, Jacob R.; Li, Yongqiang; Dillard, David A.; Case, Scott W.; Ellis, Michael W.; Lai, Yeh-Hung; Gittleman, Craig S.
Temperature and humidity fluctuations in operating fuel cells impose significant biaxial stresses in the constrained proton exchange membranes (PEMs) of a fuel cell stack. The strength of the PEM, and its ability to withstand cyclic environment-induced stresses, plays an important role in membrane integrity and consequently, fuel cell durability. In this study, a pressure loaded blister test is used to characterize the biaxial strength of Gore-Select ® series 57 over a range of times and temperatures. Hencky's classical solution for a pressurized circular membrane is used to estimate biaxial strength values from burst pressure measurements. A hereditary integral is employed to construct the linear viscoelastic analog to Hencky's linear elastic exact solution. Biaxial strength master curves are constructed using traditional time-temperature superposition principle techniques and the associated temperature shift factors show good agreement with shift factors obtained from constitutive (stress relaxation) and fracture (knife slit) tests of the material.
Gap measurement and bond strength of five selected adhesive systems bonded to tooth structure.
Arbabzadeh, F; Gage, J P; Young, W G; Shahabi, S; Swenson, S M
1998-06-01
The ability of a restorative material to bond and seal the interface with tooth structure is perhaps the most significant factor in determining resistance to marginal caries. Thus, the quality and durability of marginal seal and bond strength are major considerations in the selection of restorative materials. The purpose of this study was to compare the bond strength and marginal discrepancies of five adhesive systems: All-Bond 2, Clearfil Liner Bond, KB 200, ProBond and AELITE Bond. Twenty-five buccal and 25 lingual cavities were prepared in 25 caries-free extracted molar teeth, giving 10 cavities for each of the 5 adhesive systems. All teeth were restored with the resin composite Pertac Hybrid, or PRISMA Total Performance Hybrid with their appropriate adhesive systems. After restoration, the teeth were thermocycled, were stained with a 1.5% aqueous solution of a procion dye (reactive orange 14) and sectioned coronally with a saw microtome. Three sections of 200 microns thickness were prepared from each restoration which were then examined microscopically to measure marginal gap widths using a confocal tandem microscope. Shear bond strength measurements were carried out on the dentine bond using a universal testing machine. The All-Bond 2 adhesive system was found to have higher shear bond strength and to have the least gap width at the cementodentinal margin.
FTIR spectra and mechanical strength analysis of some selected rubber derivatives.
Gunasekaran, S; Natarajan, R K; Kala, A
2007-10-01
Rubber materials have a wide range of commercial applications, such as infant diapers, feminine hygiene products, drug delivery devices and incontinence products, as well as rubber tubes, tyres, etc. In the present work, studies on the mechanical properties of selected rubber materials, viz. natural rubber (NR), styrene butadiene rubber (SBR), nitrile butadiene rubber (NBR) and ethylene propylene diene monomer (EPDM), have been carried out in three states: raw, vulcanized and reinforced. To enhance the quality of rubber elastomers, new elastomers called polyblends were prepared by blending NR with NBR and with EPDM. We report a novel approach for the evaluation of various physico-mechanical properties such as mechanical strength, tensile strength, elongation and hardness. The method is simple, direct and fast, and involves infrared spectral measurements. With the application of modern infrared spectroscopy, the mechanical strength of these rubber materials has been analyzed by calculating internal standards among the methyl and methylene group vibrational frequencies obtained from FTIR spectra. Tensile strength measurements were also carried out with a universal testing machine. The physico-mechanical properties of the rubber derivatives obtained by the IR-based method are in good agreement with data from the standard methods.
Parameter estimation and model selection in computational biology.
Gabriele Lillacci
2010-03-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
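The core step the abstract describes — treating unknown parameters as extra state variables and letting an extended Kalman filter estimate them jointly with the state — can be sketched for a toy first-order decay model dx/dt = -k·x. The model, noise levels, and tuning constants below are illustrative, not the paper's heat-shock system:

```python
import numpy as np

def ekf_rate_estimate(y, dt=0.1, r=0.05 ** 2, q_k=1e-6):
    """Joint state/parameter EKF for dx/dt = -k*x observed as y = x + noise.
    The augmented state is s = [x, k], with k modeled as a slow random walk."""
    s = np.array([y[0], 0.5])                 # initial guess: k = 0.5
    P = np.diag([r, 1.0])                     # large initial uncertainty on k
    H = np.array([[1.0, 0.0]])                # we only measure x
    Q = np.diag([1e-8, q_k])
    for z in y[1:]:
        x, k = s
        # Predict with an Euler step and its Jacobian
        s = np.array([x - k * x * dt, k])
        F = np.array([[1.0 - k * dt, -x * dt],
                      [0.0, 1.0]])
        P = F @ P @ F.T + Q
        # Update with the new measurement
        S = H @ P @ H.T + r
        K = P @ H.T / S
        s = s + (K * (z - s[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
    return s[1]                               # final estimate of the rate k

rng = np.random.default_rng(2)
t = np.arange(0.0, 10.0, 0.1)
y = 2.0 * np.exp(-0.3 * t) + 0.05 * rng.normal(size=t.size)   # true k = 0.3
k_hat = ekf_rate_estimate(y)
```

The cross-covariance between x and k, built up through the Jacobian term -x·dt, is what lets measurement innovations correct the parameter — the same mechanism scaled up to many kinetic constants in the paper.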
Murakami, Hiroki; Watanabe, Tsuneo; Fukuoka, Daisuke; Terabayashi, Nobuo; Hara, Takeshi; Muramatsu, Chisako; Fujita, Hiroshi
2016-04-01
The term "locomotive syndrome" has been proposed to describe the state of requiring care because of musculoskeletal disorders, and its high-risk condition. Reduction of knee extension strength is cited as one of the risk factors, and accurate measurement of this strength is needed for evaluation. Measurement of knee extension strength using a dynamometer is one of the most direct and quantitative methods. This study aims to develop a system for estimating the knee extension strength using ultrasound images of the rectus femoris muscle obtained with non-invasive ultrasonic diagnostic equipment. First, we extract the muscle area from the ultrasound images and determine image features, such as the thickness of the muscle. We combine these features with physical features, such as the patient's height, and build a regression model of the knee extension strength from training data. We have developed a system for estimating the knee extension strength by applying the regression model to the features obtained from test data. Using test data of 168 cases, the correlation coefficient between the measured and estimated values was 0.82. This result suggests that the system can estimate knee extension strength with high accuracy.
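The regression step described above can be sketched with ordinary least squares on synthetic data. The feature names, coefficients, and noise level are hypothetical; the paper's actual features and model are not specified beyond "image plus physical features":

```python
import numpy as np

rng = np.random.default_rng(3)
n = 168                                       # same sample size as the study
thick = rng.normal(2.0, 0.4, n)               # rectus femoris thickness (cm), hypothetical
height = rng.normal(165.0, 8.0, n)            # body height (cm), hypothetical
# Hypothetical linear ground truth for knee extension strength, plus noise
strength = 12.0 * thick + 0.15 * height + rng.normal(0.0, 2.0, n)

X = np.column_stack([thick, height, np.ones(n)])   # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, strength, rcond=None)
pred = X @ coef
r = np.corrcoef(pred, strength)[0, 1]          # measured-vs-estimated correlation
```

In the study this correlation was computed on held-out test cases rather than the training fit, which is the more honest protocol when the model will be deployed on new patients.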
The Transverse Rupture Strength in Ti-6Al-4V Alloy Manufactured by Selective Laser Melting
Lai Pang-Hsin
2015-01-01
The objective of this study was to investigate the transverse rupture strength and apparent hardness of selective laser melted Ti-6Al-4V alloys manufactured in the vertical (V) and horizontal (H) directions. The microstructure and the distribution of alloy elements were examined by optical microscopy and electron probe microanalysis, respectively. The results show that columnar α′ grains form along the building direction, and that the elemental distributions of Ti, Al, and V are homogeneous in the alloy. The building direction does not significantly affect the density or apparent hardness. However, the transverse rupture strengths (TRS) are clearly dominated by the building directions investigated in this study. The TRS of an H specimen is superior to that of a V specimen by 48%. This phenomenon can be mainly attributed to the presence of disc-shaped pores.
Estimates of fault strength from the Variscan foreland of the northern UK
Copley, Alex; Woodcock, Nigel
2016-10-01
We provide new insights into the long-standing debate regarding fault strength, by studying structures active in the late Carboniferous in the foreland of the Variscan Mountain range in the northern UK. We describe a method to estimate the seismogenic thickness for ancient deformation zones, at the time they were active, based upon the geometry of fault-bounded extensional basins. We then perform calculations to estimate the forces exerted between mountain ranges and their adjacent lowlands in the presence of thermal and compositional effects on the density. We combine these methods to calculate an upper bound on the stresses that could be supported by faults in the Variscan foreland before they began to slip. We find the faults had a low effective coefficient of friction (i.e. 0.02-0.24), and that the reactivated pre-existing faults were at least 30% weaker than unfaulted rock. These results show structural inheritance to be important, and suggest that the faults had a low intrinsic coefficient of friction, high pore-fluid pressures, or both.
Multi-task GLOH feature selection for human age estimation
Liang, Yixiong; Xu, Ying; Xiang, Yao; Zou, Beiji
2011-01-01
In this paper, we propose a novel age estimation method based on the GLOH feature descriptor and multi-task learning (MTL). The GLOH feature descriptor, one of the state-of-the-art feature descriptors, is used to capture the age-related local and spatial information of a face image. As the extracted GLOH features are often redundant, MTL is designed to select the most informative feature bins for the age estimation problem, while the corresponding weights are determined by ridge regression. This approach largely reduces the dimension of the features, which can not only improve performance but also decrease the computational burden. Experiments on the publicly available FG-NET database show that the proposed method can achieve comparable performance to previous approaches while using much fewer features.
Gayton, Scott D; Kehoe, E James
2015-02-01
For entry into the Australian Army Special Forces (SF), applicants undergo a barrage of strenuous physical and psychological assessments. Despite this screening, subsequent attrition rates in the first weeks of initial selection courses are typically high, and entry testing results have had limited success in predicting who will complete these courses. An SF applicant's character is often thought to be a decisive factor; however, this claim has remained untested. Accordingly, SF applicants (N = 115) were asked to rank themselves on 24 character strengths at the start of the selection process. Successful applicants (n = 18) assigned their top ranks to team worker (72%), integrity (67%), and persistence (50%). Applicants (n = 31) who did not include any of those three strengths in their top ranks all failed to complete the selection process. In contrast, successful versus unsuccessful applicants did not discernibly differ on physical assessments and a written test. Results are discussed with respect to their implications for enhancing the assessment of SF applicants.
Fatigue strength of a Ti-6Al-4V alloy produced by selective laser melting
Gerov, M. V.; Vladislavskaya, E. Yu.; Terent'ev, V. F.; Prosvirnin, D. V.; Kolmakov, A. G.; Antonova, O. S.
2016-10-01
The fatigue properties and fracture mechanisms of the Ti-6Al-4V alloy produced by selective laser melting (SLM) from CL41TiELI titanium-alloy powder have been studied. Cylindrical blanks were grown at angles of 90° and 45° to the platform. The best fatigue strength is observed in samples whose blanks were grown at an angle of 45°. It is found that the structure of the SLM material can contain regions with unmelted powder particles, which serve as initiation sites for fatigue cracks.
Aging Behavior of High-Strength Al Alloy 2618 Produced by Selective Laser Melting
Casati, Riccardo; Lemke, Jannis Nicolas; Alarcon, Adrianni Zanatta; Vedani, Maurizio
2017-02-01
High Si-bearing Al alloys are commonly used in additive manufacturing, but they have moderate mechanical properties. New high-strength compositions are necessary to spread the use of additively manufactured Al parts in heavy-duty structural applications. This work focuses on the microstructure, mechanical behavior, and aging response of an Al alloy 2618 processed by selective laser melting. Calorimetric analysis, electron microscopy, and compression tests were performed in order to correlate the mechanical properties with the peculiar microstructure induced by laser melting and thermal treatments.
Weber, Marco; Ruch, Willibald
2012-01-01
The present study investigated the role of 24 character strengths in 87 adolescent romantic relationships, focusing on their role in partner selection and in mates' life satisfaction. Measures included the Values in Action Inventory of Strengths for Youth, the Students' Life Satisfaction Scale, and an Ideal Partner Profiler for the composition of an ideal partner. Honesty, humor, and love were the most preferred character strengths in an ideal partner. Hope, religiousness, honesty, ...
Quantiles, parametric-select density estimation, and bi-information parameter estimators
Parzen, E.
1982-01-01
A quantile-based approach to statistical analysis and probability modeling of data is presented which formulates statistical inference problems as functional inference problems in which the parameters to be estimated are density functions. Density estimators can be non-parametric (computed independently of model identified) or parametric-select (approximated by finite parametric models that can provide standard models whose fit can be tested). Exponential models and autoregressive models are approximating densities which can be justified as maximum entropy for respectively the entropy of a probability density and the entropy of a quantile density. Applications of these ideas are outlined to the problems of modeling: (1) univariate data; (2) bivariate data and tests for independence; and (3) two samples and likelihood ratios. It is proposed that bi-information estimation of a density function can be developed by analogy to the problem of identification of regression models.
A conditional likelihood is required to estimate the selection coefficient in ancient DNA
Valleriani, Angelo
2016-08-01
Time-series of allele frequencies are a useful and unique set of data for determining the strength of natural selection against the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only a single such trajectory is available, and the coverage of the fitness landscape is very limited. In fact, a single trajectory is more representative of a process conditioned on both its initial and final states than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient even when allele frequencies are close to fixation, whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable even when it is not correct.
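The distinction the abstract draws can be made concrete for a discrete Wright-Fisher model (a toy construction of mine, not the paper's two specific models): the unconditioned log-likelihood of an allele-count path is a sum of log binomial transition probabilities, and the conditional version subtracts the log-probability of ending at the observed final state at all.

```python
import numpy as np
from math import comb

def wf_matrix(N, s):
    """Wright-Fisher transition matrix: count i maps to Binomial(N, p_sel),
    with p_sel = p(1+s)/(1 + s*p) and p = i/N."""
    T = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        p = (i / N) * (1 + s) / (1 + s * i / N)
        T[i] = [comb(N, j) * p ** j * (1 - p) ** (N - j) for j in range(N + 1)]
    return T

def path_loglik(counts, N, s, conditional=True):
    """Log-likelihood of consecutive-generation allele counts given s.
    If conditional, divide out the probability of reaching the observed
    final state from the initial one (matrix power of the transition matrix),
    which is the correction the abstract argues for."""
    T = wf_matrix(N, s)
    ll = float(sum(np.log(T[a, b]) for a, b in zip(counts[:-1], counts[1:])))
    if conditional:
        M = np.linalg.matrix_power(T, len(counts) - 1)
        ll -= np.log(M[counts[0], counts[-1]])
    return ll

# Simulate one trajectory with true s = 0.15 and profile the conditional likelihood
rng = np.random.default_rng(3)
N, s_true = 50, 0.15
path = [10]
for _ in range(25):
    p = (path[-1] / N) * (1 + s_true) / (1 + s_true * path[-1] / N)
    path.append(int(rng.binomial(N, p)))
grid = np.linspace(-0.2, 0.5, 36)
s_hat = grid[np.argmax([path_loglik(path, N, s) for s in grid])]
```

Since the conditioning probability is at most one, the conditional log-likelihood always sits at or above the unconditioned one; what matters is that its maximizer in s shifts, which is the paper's point near fixation.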
Thresholded Lasso for high dimensional variable selection and statistical estimation
Zhou, Shuheng
2010-01-01
Given n noisy samples with p dimensions, where n ≪ p, we show that a multi-step thresholding procedure based on the Lasso, which we call the Thresholded Lasso, can accurately estimate a sparse vector β ∈ ℝ^p in the linear model Y = Xβ + ε, where X is an n × p design matrix and ε ~ N(0, σ²Iₙ). We show that under the restricted eigenvalue (RE) condition (Bickel-Ritov-Tsybakov 09), it is possible to achieve the ℓ₂ loss within a logarithmic factor of the ideal mean square error one would achieve with an oracle while selecting a sufficiently sparse model, hence achieving sparse oracle inequalities; the oracle would supply perfect information about which coordinates are non-zero and which are above the noise level. In some sense, the Thresholded Lasso recovers the choices that would have been made by the ℓ₀-penalized least squares estimator, in that it selects a sufficiently sparse model without sacrificing the accuracy in ...
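As a rough illustration of such a multi-step procedure, the sketch below fits a Lasso (solved here by plain iterative soft-thresholding rather than an optimized solver), thresholds small coefficients away, and refits least squares on the selected support. The dimensions, the penalty λ and the threshold τ are arbitrary choices for the demonstration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 100, 300, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 3.0                                   # sparse true vector
y = X @ beta + 0.5 * rng.standard_normal(n)      # Y = X beta + eps

def lasso_ista(X, y, lam, n_iter=3000):
    """Lasso via proximal gradient (iterative soft-thresholding)."""
    L = np.linalg.norm(X, 2) ** 2                # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - X.T @ (X @ b - y) / L            # gradient step
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return b

b_lasso = lasso_ista(X, y, lam=100.0)            # step 1: initial Lasso fit
tau = 0.5
support = np.flatnonzero(np.abs(b_lasso) > tau)  # step 2: threshold small coefs
b_hat = np.zeros(p)                              # step 3: OLS refit on the support
b_hat[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
```

The refit step removes the shrinkage bias that the Lasso introduces on the retained coordinates, which is the intuition behind the sparse oracle inequalities in the abstract.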
Anna Rudawska
2017-06-01
The following paper analyses selected problems regarding the impact of technological parameters and the type of adherend material on the strength of adhesive-bonded steel sheet joints. The subject of the tests was a single-lap adhesive joint of S235JR steel sheet. Joints were formed on two types of substrates: with or without corrosion products on the surface. The surfaces of the steel sheet adherends were pre-treated with three cleaning solutions: acetone, Wiko industrial degreasing agent and Cortanin F anti-corrosion agent, depending on the state of the surface. Adhesive joints were formed with Epidian 53/ET/100:15 epoxy adhesive. The formed joints were subjected to one of three ageing variants: 14 days, two months and three months, followed by destructive testing to determine the shear strength of the joints. The analysis of the results obtained in the tests indicates that the strength performance of adhesive joints of corrosion-free adherends was characterised by higher values than in corroded steel sheets, regardless of ageing time.
Prey size selection and distance estimation in foraging adult dragonflies.
Olberg, R M; Worthington, A H; Fox, J L; Bessette, C E; Loosemore, M P
2005-09-01
To determine whether perching dragonflies visually assess the distance to potential prey items, we presented artificial prey, glass beads suspended from fine wires, to perching dragonflies in the field. We videotaped the responses of freely foraging dragonflies (Libellula luctuosa and Sympetrum vicinum-Odonata, suborder Anisoptera) to beads ranging from 0.5 mm to 8 mm in diameter, recording whether or not the dragonflies took off after the beads, and if so, at what distance. Our results indicated that dragonflies were highly selective for bead size. Furthermore, the smaller Sympetrum preferred beads of smaller size and the larger Libellula preferred larger beads. Each species rejected beads as large or larger than their heads, even when the beads subtended the same visual angles as the smaller, attractive beads. Since bead size cannot be determined without reference to distance, we conclude that dragonflies are able to estimate the distance to potential prey items. The range over which they estimate distance is about 1 m for the larger Libellula and 70 cm for the smaller Sympetrum. The mechanism of distance estimation is unknown, but it probably includes both stereopsis and the motion parallax produced by head movements.
Weber, Marco; Ruch, Willibald
2012-01-01
The present study investigated the role of 24 character strengths in 87 adolescent romantic relationships focusing on their role in partner selection and their role in mates' life satisfaction. Measures included the Values in Action Inventory of Strengths for Youth, the Students' Life Satisfaction Scale, and an Ideal Partner Profiler for the…
Selection of High Strength Encapsulant for MEMS Devices Undergoing High Pressure Packaging
Hamzah, A A; Husaini, Y; Majlis, B Y; Ahmad, I
2008-01-01
Deflection behavior of several encapsulant materials under uniform pressure was studied to determine the best encapsulant for MEMS devices. Encapsulation is needed to protect the movable parts of MEMS devices during the high-pressure transfer-molded packaging process. The selected encapsulant material has to have a surface deflection of less than 5 µm under 100 atm vertical loading. Deflection was simulated using CoventorWare ver.2005 software and verified with calculation results obtained using shell bending theory. A screening design was used to construct a systematic approach for selecting the best encapsulant material and thickness under uniform pressure up to 100 atm. Materials considered for this study were polyimide, parylene C and carbon-based epoxy resin. It was observed that carbon-based epoxy resin has a deflection of less than 5 µm for all thickness and pressure variations. Parylene C is acceptable and polyimide is unsuitable as a high-strength encapsulant. Carbon-based epoxy resin is considered the best encapsula...
Fatigue strength of Co-Cr-Mo alloy clasps prepared by selective laser melting.
Kajima, Yuka; Takaichi, Atsushi; Nakamoto, Takayuki; Kimura, Takahiro; Yogo, Yoshiaki; Ashida, Maki; Doi, Hisashi; Nomura, Naoyuki; Takahashi, Hidekazu; Hanawa, Takao; Wakabayashi, Noriyuki
2016-06-01
We aimed to investigate the fatigue strength of Co-Cr-Mo clasps for removable partial dentures prepared by selective laser melting (SLM). The Co-Cr-Mo alloy specimens for tensile tests (dumbbell specimens) and fatigue tests (clasp specimens) were prepared by SLM with varying angles between the building and longitudinal directions (i.e., 0° (TL0, FL0), 45° (TL45, FL45), and 90° (TL90, FL90)). The clasp specimens were subjected to cyclic deformations of 0.25 mm and 0.50 mm for 10⁶ cycles. The SLM specimens showed no obvious mechanical anisotropy in tensile tests and exhibited significantly higher yield strength and ultimate tensile strength than the cast specimens under all conditions. In contrast, a high degree of anisotropy in fatigue performance associated with the build orientation was found. For specimens under the 0.50 mm deflection, FL90 exhibited a significantly longer fatigue life (205,418 cycles) than the cast specimens (112,770 cycles). In contrast, the fatigue lives of FL0 (28,484 cycles) and FL45 (43,465 cycles) were significantly shorter. The surface roughnesses of FL0 and FL45 were considerably higher than those of the cast specimens, whereas there were no significant differences between FL90 and the cast specimens. Electron backscatter diffraction (EBSD) analysis indicated that the grains of FL0 showed a preferential orientation of the γ phase close to the direction normal to the fracture surface. In contrast, the FL45 and FL90 grains showed no significant preferential orientation. Fatigue strength may therefore be affected by a number of factors, including surface roughness and crystal orientation. The SLM process is a promising candidate for preparing tough removable partial denture frameworks, as long as the appropriate build direction is adopted.
Andersen, Stine; Frederiksen, Katrine Diemer; Hansen, Stinus;
2014-01-01
Obesity is associated with high bone mineral density (BMD), but whether obesity-related higher bone mass increases bone strength and thereby protect against fractures is uncertain. We estimated effects of obesity on bone microarchitecture and estimated strength in 36 patients (12 males and 24...... females, age 25-56 years and BMI 33.2-57.6 kg/m(2)) matched with healthy controls (age 25-54 years and BMI 19.5-24.8 kg/m(2)) in regard to gender, menopausal status, age (±6 years) and height (±6 cm) using high resolution peripheral quantitative computed tomography and dual energy X-ray absorptiometry...
Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach
Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio
2015-01-01
This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447–2001). The degree of overlap between "reference founding core" distributions and the distributions obtained from sampling the present-day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8–30% at the cost of a larger variance. Criteria aimed at maximizing locally spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case, true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics. PMID:26452043
Using coronal seismology to estimate the magnetic field strength in a realistic coronal model
Chen, Feng
2015-01-01
Coronal seismology is extensively used to estimate properties of the corona; e.g., the coronal magnetic field strength is derived from oscillations observed in coronal loops. We present a three-dimensional coronal simulation, including a realistic energy balance, in which we observe oscillations of a loop in synthesised coronal emission. We use these results to test the inversions based on coronal seismology. From the simulation of the corona above an active region we synthesise extreme ultraviolet (EUV) emission from the model corona. From this we derive maps of line intensity and Doppler shift, providing synthetic data in the same format as obtained from observations. We fit the (Doppler) oscillation of the loop in the same fashion as done for observations to derive the oscillation period and damping time. The loop oscillation seen in our model is similar to imaging and spectroscopic observations of the Sun. The velocity disturbance of the kink oscillation shows an oscillation period of 52.5 s and a damping tim...
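For context, the standard seismological inversion that such simulations test is the kink-oscillation estimate of Nakariakov & Ofman. A hedged numerical sketch follows: only the 52.5 s period comes from the abstract, while the loop length and densities are assumed illustrative values, not the model's.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (SI units)

def kink_b_field(L, P, rho_i, rho_ratio=0.1):
    """Kink-oscillation seismology estimate of the coronal field:
    B = (2L/P) * sqrt(mu0 * rho_i * (1 + rho_e/rho_i) / 2).
    L: loop length [m], P: oscillation period [s],
    rho_i: internal density [kg/m^3], rho_ratio: rho_e/rho_i (assumed)."""
    c_k = 2.0 * L / P  # phase speed of the fundamental kink mode
    return c_k * np.sqrt(MU0 * rho_i * (1.0 + rho_ratio) / 2.0)

# A loop of assumed length 100 Mm oscillating with the abstract's 52.5 s
# period, with an assumed internal density of 1e-12 kg/m^3:
B = kink_b_field(1e8, 52.5, 1e-12)  # field strength in tesla
```

With these assumed inputs the estimate lands at a few millitesla (a few tens of gauss), a typical active-region coronal value, which is the kind of inversion the simulation is designed to validate.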
Cyclotron Lines: From Magnetic Field Strength Estimators to Geometry Tracers in Neutron Stars
Chandreyee Maitra
2017-09-01
In the forty years since the discovery of the first cyclotron line in Her X-1, there have been remarkable advances in the study of the physics of accreting neutron stars. Cyclotron lines have been a major torchbearer in this regard: the only direct estimator of the magnetic field strength, a tracer of accretion geometry and an indicator of the emission beam in these systems. The main flurry of activity has centred around studying the harmonic separations, luminosity dependence, pulse phase dependence and, more recently, the shapes of the lines and the trend of long-term evolution in the line energy. This article visits the important results related to cyclotron lines since their discovery and reviews their significance. Emphasis is laid on pulse-phase-resolved spectroscopy and the important clues that a joint timing and spectral study can provide in this context, to build a complete picture of the physics of accretion, and hence of X-ray emission, in accreting neutron stars.
Thompson, K A; Cory, K A; Johnson, M T J
2017-06-01
Evolutionary biologists have long sought to understand the ecological processes that generate plant reproductive diversity. Recent evidence indicates that constitutive antiherbivore defences can alter natural selection on reproductive traits, but it is unclear whether induced defences will have the same effect and whether reduced foliar damage in defended plants is the cause of this pattern. In a factorial field experiment using common milkweed, Asclepias syriaca L., we induced plant defences using jasmonic acid (JA) and imposed foliar damage using scissors. We found that JA-induced plants experienced selection for more inflorescences that were smaller in size (fewer flowers), whereas control plants only experienced a trend towards selection for larger inflorescences (more flowers); all effects were independent of foliar damage. Our results demonstrate that induced defences can alter both the strength and direction of selection on reproductive traits, and suggest that antiherbivore defences may promote the evolution of plant reproductive diversity. © 2017 European Society For Evolutionary Biology.
Krueger, O.; Ebinghaus, R.; Kock, H.H.; Richter-Politz, I.; Geilhufe, C.
1998-12-31
Anthropogenic emission sources of gaseous mercury at the contaminated industrial site BSL Werk Schkopau have been determined by measurements and numerical modelling applying a local dispersion model. The investigations are based on measurements from several field campaigns in the period between December 1993 and June 1994. The estimation of the source strengths was performed by inverse modelling, using measurements as constraints for the dispersion model. Model experiments confirmed the applicability of the inverse modelling procedure for source strength estimation at BSL Werk Schkopau. At the factory premises investigated, the source strengths of four source areas, among them three closed chlor-alkali production plants, one partly removed acetaldehyde factory and additionally one chlor-alkali plant still in production, have been identified, with an approximate total gaseous mercury emission of less than 2.5 kg/day. (orig.)
Magnetic Field Feature Extraction and Selection for Indoor Location Estimation
Carlos E. Galván-Tejada
2014-06-01
User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allow us to exploit not only infrastructure made for everyday use, such as WiFi, but also natural infrastructure, as is the case of the natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features in the model from 46 to 5, regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to reject false positives (specificity) in both scenarios.
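GA-based feature selection of the kind described can be sketched as follows. This is a toy stand-in, not the authors' pipeline: the 46-feature data, the informative feature indices and the nearest-centroid fitness function are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for the paper's setup: pick a small subset of 46
# "magnetic field" features that best predicts a binary room label.
n_feat, n_samples = 46, 400
X = rng.standard_normal((n_samples, n_feat))
true = [3, 11, 27]                         # informative features (hypothetical)
y = (X[:, true].sum(axis=1) > 0).astype(int)

def fitness(mask):
    """Score a feature subset: in-sample accuracy of a nearest-centroid
    classifier, minus a small penalty per selected feature."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return 0.0
    Xs = X[:, idx]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return (pred == y).mean() - 0.01 * idx.size

pop = rng.random((30, n_feat)) < 0.2       # initial random feature masks
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # truncation selection
    children = []
    for _ in range(len(pop)):
        p1, p2 = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_feat)              # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(n_feat) < 0.02           # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
```

The per-feature penalty plays the role of the paper's drive toward a compact model (46 features down to 5): subsets that add uninformative features score worse even at equal accuracy.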
VSRR - Quarterly provisional estimates for selected indicators of mortality
U.S. Department of Health & Human Services — Provisional estimates of death rates. Estimates are presented for each of the 15 leading causes of death plus estimates for deaths attributed to drug overdose, falls...
Kumar, Sudhir; Datta, D; Sharma, S D; Chourasiya, G; Babu, D A R; Sharma, D N
2014-04-01
Verification of the strength of high-dose-rate (HDR) ¹⁹²Ir brachytherapy sources on receipt from the vendor is an important component of an institutional quality assurance program. Either reference air-kerma rate (RAKR) or air-kerma strength (AKS) is the recommended quantity for specifying the strength of gamma-emitting brachytherapy sources. The use of a Farmer-type cylindrical ionization chamber of sensitive volume 0.6 cm³ is one of the recommended methods for measuring the RAKR of HDR ¹⁹²Ir brachytherapy sources. While using the cylindrical chamber method, it is required to determine the positioning error of the ionization chamber with respect to the source, which is called the distance error. An attempt has been made to apply fuzzy set theory to estimate the subjective uncertainty associated with the distance error. A simplified approach of applying this fuzzy set theory has been proposed in the quantification of uncertainty associated with the distance error. In order to express the uncertainty in the framework of fuzzy sets, the uncertainty index was estimated and was found to be within 2.5%, which further indicates that the possibility of error in measuring such distance may be of this order. It is observed that the relative distance lᵢ estimated by the analytical method and the fuzzy set theoretic approach are consistent with each other. The crisp values of lᵢ estimated using the analytical method lie within the bounds computed using fuzzy set theory. This indicates that lᵢ values estimated using analytical methods are within 2.5% uncertainty. This value of uncertainty in distance measurement should be incorporated in the uncertainty budget, while estimating the expanded uncertainty in HDR ¹⁹²Ir source strength measurement.
Unknown
2002-01-01
In the present study, a modified Hall-Petch correlation based on the dislocation pile-up model was used to estimate the yield strength of SiCp/Al composites. The experimental results show that the modified Hall-Petch correlation, expressed as σcy = 244 + 371·λ^(-1/2), fits the experimental data very well, indicating that the strength increase of SiCp/Al composites might be due to the direct blocking of dislocation motion by the particulate-matrix interface; that is, dislocation pile-up is the most probable strengthening mechanism for SiCp/Al composites.
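Plugging numbers into the fitted correlation gives a quick sense of its predictions. The units are assumptions here (λ in µm, σ in MPa, as is typical for Hall-Petch-type fits), since the abstract does not state them.

```python
def yield_strength(lmbda_um):
    """Modified Hall-Petch estimate for SiCp/Al composites:
    sigma_cy = 244 + 371 * lambda^(-1/2).
    lmbda_um: interparticle spacing lambda (assumed units: micrometres);
    returns the estimated yield strength (assumed units: MPa)."""
    return 244.0 + 371.0 * lmbda_um ** -0.5

# Example: a spacing of 10 um gives 244 + 371/sqrt(10) ~ 361.3 MPa
sigma = yield_strength(10.0)
```

Halving the spacing raises the strengthening term by a factor of √2, which is the characteristic square-root scaling of pile-up-controlled strengthening.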
Evolutionary games in a generalized Moran process with arbitrary selection strength and mutation
Quan Jia; Wang Xian-Jia
2011-01-01
By using a generalized fitness-dependent Moran process, an evolutionary model for symmetric 2×2 games in a well-mixed population of finite size is investigated. In the model, the payoff an individual accumulates from games is mapped into fitness using an exponential function. Both the selection strength β and the mutation rate ε are considered. The process is an ergodic birth-death process. Based on the limit distribution of the process, we give analytical results for which strategy will be favoured when ε is small enough. The results depend not only on the payoff matrix of the game, but also on the population size. In particular, we prove that natural selection favours the strategy which is risk-dominant when the population size is large enough. For arbitrary β and ε values, the 'Hawk-Dove' game and the 'Coordinate' game are used to illustrate our model. We give the evolutionarily stable strategy (ESS) of the games and compare the results with those of the replicator dynamics in the infinite population. The results are verified by simulation experiments.
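A fitness-dependent Moran process of this general shape can be simulated in a few lines. This is a hedged sketch under assumptions: the Hawk-Dove-like payoff values, population size, β and ε are all arbitrary illustrative choices, and the update rule (exponential payoff-to-fitness map, mutation at birth, uniform death) is one standard formulation, not necessarily the authors' exact process.

```python
import numpy as np

rng = np.random.default_rng(2)

# Payoffs of a symmetric 2x2 game (Hawk-Dove-like values, assumed):
# a = A vs A, b = A vs B, c = B vs A, d = B vs B
a, b, c, d = 0.0, 3.0, 1.0, 2.0
N = 50        # population size
beta = 0.1    # selection strength
eps = 0.01    # mutation rate

def payoffs(i):
    """Average payoffs to an A-player and a B-player when i individuals play A."""
    pi_a = (a * (i - 1) + b * (N - i)) / (N - 1)
    pi_b = (c * i + d * (N - i - 1)) / (N - 1)
    return pi_a, pi_b

def step(i):
    """One birth-death event of the generalized Moran process."""
    pi_a, pi_b = payoffs(i)
    fa, fb = np.exp(beta * pi_a), np.exp(beta * pi_b)   # exponential fitness map
    prob_a = i * fa / (i * fa + (N - i) * fb)           # A selected to reproduce
    birth_a = prob_a * (1 - eps) + (1 - prob_a) * eps   # mutation at birth
    offspring_a = rng.random() < birth_a
    dies_a = rng.random() < i / N                       # uniform random death
    return i + int(offspring_a) - int(dies_a)

# Estimate the limit (stationary) distribution by long-run occupancy.
i = N // 2
counts = np.zeros(N + 1)
for _ in range(100_000):
    i = step(i)
    counts[i] += 1
stationary = counts / counts.sum()
```

Because ε > 0 keeps both boundaries non-absorbing, the chain is ergodic and the occupancy histogram approximates the limit distribution on which the paper's analysis rests.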
The effect of selection on genetic parameter estimates
Unknown
This is ascribed to the fact that no selection was exercised during this period. .... The decrease in years 5-6 under the non-selection scenario is ascribed to ..... selection process and distribution of selection criteria (Henderson, 1975; Im, 1989; ...
Hann, Damjan
2016-09-01
This study presents an innovative approach for determining the unconfined yield strength σc during the excavation of coal from the earth's crust, using equipment developed for measuring the mechanical properties of bulk materials stored in silos. Highly productive excavation of coal with hanging-wall top caving leads to intensive deformations in the hanging wall, and the broken coal can be considered a bulk material. In this research, the Johanson Hang-Up Indicizer shear tester was used to measure the unconfined yield strength of the tested samples, even though such a tester cannot reproduce stress-strain conditions similar to those occurring during excavation. An attempt was made to estimate the real unconfined yield strength of broken coal deep under the surface through a combination of measured data and extrapolation.
Bilgehan, Mahmut
2011-03-01
In this paper, adaptive neuro-fuzzy inference system (ANFIS) and artificial neural network (ANN) models have been successfully used for the evaluation of the relationship between concrete compressive strength and ultrasonic pulse velocity (UPV) values, using experimental data obtained from many cores taken from different reinforced concrete structures of different ages and unknown concrete mixture ratios. A comparative study is made using the neural network and neuro-fuzzy (NF) techniques. Statistical measures were used to evaluate the performance of the models. Comparing the results, it is found that the proposed ANFIS architecture with Gaussian membership functions performs better than the multilayer feed-forward ANN trained by the backpropagation algorithm. The final results show that ANFIS modelling in particular may constitute an efficient tool for prediction of concrete compressive strength. The ANFIS and neural network architectures established in the current study perform sufficiently well in the estimation of concrete compressive strength, and the ANFIS model's estimates in particular closely follow the desired values. Both ANFIS and ANN techniques can be used in conditions where many structures are to be examined in a restricted time. The presented approaches make it practical to determine concrete strengths in existing reinforced concrete structures whose records of concrete mixture ratios are not available. Thus, researchers can easily evaluate the compressive strength of concrete specimens using UPV and density values. These methods also contribute to a remarkable reduction in computational time without any significant loss of accuracy. A comparison of the results clearly shows that the NF approach in particular can be used effectively to predict the compressive strength of concrete using UPV and density values. In addition, these model architectures can be used as a nondestructive procedure for health monitoring of ...
Kielgast, Mathias Rønholt; Rasmussen, Anders Charly; Laursen, Mathias Hjorth
2016-01-01
This letter presents an experimental study and a novel modelling approach of the wireless channel of smart utility meters placed in basements or sculleries. The experimental data consist of signal strength measurements of consumption report packets. Since such packets are only registered if they can be decoded by the receiver, the part of the signal strength distribution that falls below the receiver sensitivity threshold is not observable. We combine a Rician fading model with a bias function that captures the cut-off in the observed signal strength measurements. Two sets of experimental data are analysed. It is shown that the proposed method offers an approximation of the distribution of the signal strength measurements that is better than a naïve Rician fitting.
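The censoring idea can be illustrated with a standard truncated-likelihood fit. This is a simplification, not the letter's method: the authors' bias function is replaced here by a hard truncation at the sensitivity threshold, and all parameter values are invented for the demonstration.

```python
import numpy as np
from scipy.stats import rice
from scipy.optimize import minimize

# Simulate Rician signal amplitudes; packets whose signal falls below the
# receiver sensitivity threshold are never decoded, hence never observed.
b_true, scale_true, thresh = 2.0, 1.0, 1.5
x = rice.rvs(b_true, scale=scale_true, size=5000, random_state=3)
obs = x[x >= thresh]                       # only decodable packets are logged

def negloglik(theta):
    """Negative log-likelihood of the observed (left-truncated) sample."""
    b, scale = theta
    if b <= 0 or scale <= 0:
        return np.inf
    # Renormalise each density value by P(X >= thresh) to account for
    # the unobservable part of the distribution.
    return -(rice.logpdf(obs, b, scale=scale).sum()
             - obs.size * rice.logsf(thresh, b, scale=scale))

res = minimize(negloglik, x0=[1.0, 0.5], method="Nelder-Mead")
b_hat, scale_hat = res.x
```

A naïve fit that ignores the truncation would be biased toward stronger apparent signals, which is exactly the failure mode the letter's bias-function model is designed to avoid.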
Engineer, Navzer D; Percaccio, Cherie R; Pandya, Pritesh K; Moucha, Raluca; Rathbun, Daniel L; Kilgard, Michael P
2004-07-01
Over the last 50 yr, environmental enrichment has been shown to generate more than a dozen changes in brain anatomy. The consequences of these physical changes on information processing have not been well studied. In this study, rats were housed in enriched or standard conditions either prior to or after reaching sexual maturity. Evoked potentials from awake rats and extracellular recordings from anesthetized rats were used to document responses of auditory cortex neurons. This report details several significant, new findings about the influence of housing conditions on the responses of rat auditory cortex neurons. First, enrichment dramatically increases the strength of auditory cortex responses. Tone-evoked potentials of enriched rats, for example, were more than twice the amplitude of rats raised in standard laboratory conditions. Second, cortical responses of both young and adult animals benefit from exposure to an enriched environment and are degraded by exposure to an impoverished environment. Third, housing condition resulted in rapid remodeling of cortical responses in <2 wk. Fourth, recordings made under anesthesia indicate that enrichment increases the number of neurons activated by any sound. This finding shows that the evoked potential plasticity documented in awake rats was not due to differences in behavioral state. Finally, enrichment made primary auditory cortex (A1) neurons more sensitive to quiet sounds, more selective for tone frequency, and altered their response latencies. These experiments provide the first evidence of physiologic changes in auditory cortex processing resulting from generalized environmental enrichment.
Quantification of soil physical properties has traditionally been through soil sampling and laboratory analyses, which is time-, cost-, and labor-consuming, making it difficult to obtain the spatially-dense data required for precision agriculture. Soil strength and apparent electrical conductivity (...
Forgetting the Once-Seen Face: Estimating the Strength of an Eyewitness's Memory Representation
Deffenbacher, Kenneth A.; Bornstein, Brian H.; McGorty, E. Kiernan; Penrod, Steven D.
2008-01-01
The fidelity of an eyewitness's memory representation is an issue of paramount forensic concern. Psychological science has been unable to offer more than vague generalities concerning the relation of retention interval to memory trace strength for the once-seen face. A meta-analysis of 53 facial memory studies produced a highly reliable…
Loturco, Irineu; Artioli, Guilherme Giannini; Kobal, Ronaldo; Gil, Saulo; Franchini, Emerson
2014-07-01
This study investigated the relationship between punching acceleration and selected strength and power variables in 19 professional karate athletes from the Brazilian National Team (9 men and 10 women; age, 23 ± 3 years; height, 1.71 ± 0.09 m; and body mass [BM], 67.34 ± 13.44 kg). Punching acceleration was assessed under 4 different conditions in a randomized order: (a) fixed distance aiming to attain maximum speed (FS), (b) fixed distance aiming to attain maximum impact (FI), (c) self-selected distance aiming to attain maximum speed, and (d) self-selected distance aiming to attain maximum impact. The selected strength and power variables were as follows: maximal dynamic strength in bench press and squat-machine, squat and countermovement jump height, mean propulsive power in bench throw and jump squat, and mean propulsive velocity in jump squat with 40% of BM. Upper- and lower-body power and maximal dynamic strength variables were positively correlated with punch acceleration in all conditions. Multiple regression analysis also revealed predictive variables: relative mean propulsive power in squat jump (W·kg⁻¹), and maximal dynamic strength (1 repetition maximum) in both bench press and squat-machine exercises. An impact-oriented instruction and a self-selected distance to start the movement seem to be crucial to reach the highest acceleration during punching execution. This investigation, while demonstrating strong correlations between punching acceleration and strength-power variables, also provides important information for coaches, especially for designing better training strategies to improve punching speed.
Estimates of External Validity Bias When Impact Evaluations Select Sites Nonrandomly
Bell, Stephen H.; Olsen, Robert B.; Orr, Larry L.; Stuart, Elizabeth A.
2016-01-01
Evaluations of educational programs or interventions are typically conducted in nonrandomly selected samples of schools or districts. Recent research has shown that nonrandom site selection can yield biased impact estimates. To estimate the external validity bias from nonrandom site selection, we combine lists of school districts that were…
Cheung, Angela M; Majumdar, Sharmila; Brixen, Kim; Chapurlat, Roland; Fuerst, Thomas; Engelke, Klaus; Dardzinski, Bernard; Cabal, Antonio; Verbruggen, Nadia; Ather, Shabana; Rosenberg, Elizabeth; de Papp, Anne E
2014-08-01
The cathepsin K inhibitor odanacatib (ODN), currently in phase 3 development for postmenopausal osteoporosis, has a novel mechanism of action that reduces bone resorption while maintaining bone formation. In phase 2 studies, odanacatib increased areal bone mineral density (aBMD) at the lumbar spine and total hip progressively over 5 years. To determine the effects of ODN on cortical and trabecular bone and estimate changes in bone strength, we conducted a randomized, double-blind, placebo-controlled trial, using both quantitative computed tomography (QCT) and high-resolution peripheral (HR-p)QCT. In previously published results, odanacatib was superior to placebo with respect to increases in trabecular volumetric BMD (vBMD) and estimated compressive strength at the spine, and integral and trabecular vBMD and estimated strength at the hip. Here, we report the results of HR-pQCT assessment. A total of 214 postmenopausal women (mean age 64.0 ± 6.8 years and baseline lumbar spine T-score -1.81 ± 0.83) were randomized to oral ODN 50 mg or placebo, weekly for 2 years. With ODN, significant increases from baseline in total vBMD occurred at the distal radius and tibia. Treatment differences from placebo were also significant (3.84% and 2.63% for radius and tibia, respectively). At both sites, significant differences from placebo were also found in trabecular vBMD, cortical vBMD, cortical thickness, cortical area, and strength (failure load) estimated using finite element analysis of HR-pQCT scans (treatment differences at radius and tibia = 2.64% and 2.66%). At the distal radius, odanacatib significantly improved trabecular thickness and bone volume/total volume (BV/TV) versus placebo. At a more proximal radial site, odanacatib attenuated the increase in cortical porosity found with placebo (treatment difference = -7.7%, p = 0.066). At the distal tibia, odanacatib significantly improved trabecular number, separation, and BV/TV versus placebo. Safety
How Metastrategic Considerations Influence the Selection of Frequency Estimation Strategies
Brown, Norman R.
2008-01-01
Prior research indicates that enumeration-based frequency estimation strategies become increasingly common as memory for relevant event instances improves and that moderate levels of context memory are associated with moderate rates of enumeration [Brown, N. R. (1995). Estimation strategies and the judgment of event frequency. Journal of…
M. S. Lorrain
Quality control of structural concrete has been conducted for several decades based mainly on the results of axial compression tests. This kind of test, although widely used, is not exempt from errors and has some considerable drawbacks that may affect its reliability, such as the need for appropriate and careful specimen conditioning and the adoption of adequate capping techniques. For these reasons, it would be useful to have complementary or alternative ways to check compressive strength, in order to improve concrete quality control. The use of a bond test to monitor concrete strength is being proposed by an international group of researchers from France, Tunisia and Brazil as a potential means to this end. Given the fact that the link between bond resistance and concrete strength is already well established, this type of test seems to be a viable alternative to traditional methods. Nonetheless, to check if the underlying principle is sound when used in different circumstances, the group has been gathering data from several studies conducted by different researchers in various countries, with distinct concretes and rebar types. An analysis of the data collected shows that there is a clear and strong correlation between bond resistance and compressive strength, regardless of the influence of other variables. This result validates the basic idea of using an Appropriate Pull-Out (APULOT) bond test to assess concrete strength. If the general principle is valid for random data obtained from different studies, the definition of a clear and appropriate test will probably lead to the reduction of experimental noise and increase the precision of the strength estimates obtained using this method.
Numerical Model based Reliability Estimation of Selective Laser Melting Process
Mohanty, Sankhya; Hattel, Jesper Henri
2014-01-01
Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being on par with conventional processes such as welding and casting, primarily because of its unreliability. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single-track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
Shanbhogue, Vikram V; Hansen, Stinus; Jørgensen, Niklas Rye; Brixen, Kim; Gravholt, Claus H
2014-11-01
Although the expected skeletal manifestations of testosterone deficiency in Klinefelter's syndrome (KS) are osteopenia and osteoporosis, the structural basis for this is unclear. The aim of this study was to assess bone geometry, volumetric bone mineral density (vBMD), microarchitecture, and estimated bone strength using high-resolution peripheral quantitative computed tomography (HR-pQCT) in patients with KS. Thirty-one patients with KS confirmed by lymphocyte chromosome karyotyping aged 35.8 ± 8.2 years were recruited consecutively from a KS outpatient clinic and matched with respect to age and height with 31 healthy subjects aged 35.9 ± 8.2 years. Dual-energy X-ray absorptiometry (DXA) and HR-pQCT were performed in all participants, and blood samples were analyzed for hormonal status and bone biomarkers in KS patients. Twenty-one KS patients were on long-term testosterone-replacement therapy. In weight-adjusted models, HR-pQCT revealed a significantly lower cortical area (p < 0.01), total and trabecular vBMD (p = 0.02 and p = 0.04), trabecular bone volume fraction (p = 0.04), trabecular number (p = 0.05), and estimates of bone strength, whereas trabecular spacing was higher (p = 0.03) at the tibia in KS patients. In addition, cortical thickness was significantly reduced, both at the radius and tibia (both p < 0.01). There were no significant differences in indices of bone structure, estimated bone strength, or bone biomarkers in KS patients with and without testosterone therapy. This study showed that KS patients had lower total vBMD and a compromised trabecular compartment with a reduced trabecular density and bone volume fraction at the tibia. The compromised trabecular network integrity attributable to a lower trabecular number with relative preservation of trabecular thickness is similar to the picture found in women with aging. KS patients also displayed a reduced cortical area and thickness at the tibia, which in
1982-06-01
An Estimate of Some Strengths and Weaknesses of the Soviet Naval Officer (U), Naval Postgraduate School, Monterey, California. Rules-of-the-road quizzes and essays address the need for the Soviet naval officer to be a proficient shiphandler; in this area, Adm. Gorshkov...
Raja, K Sasikumar; Hariharan, K; Kathiravan, C; Wang, T J
2016-01-01
We report ground based, low frequency heliograph (80 MHz), spectral (85-35 MHz) and polarimeter (80 and 40 MHz) observations of drifting, non-thermal radio continuum associated with the `halo' coronal mass ejection (CME) that occurred in the solar atmosphere on 2013 March 15. The magnetic field strengths ($B$) near the radio source were estimated to be $B \\approx 2.2 \\pm 0.4$ G at 80 MHz and $B \\approx 1.4 \\pm 0.2$ G at 40 MHz. The corresponding radial distances ($r$) are $r \\approx 1.9~R_{\\odot}$ (80 MHz) and $r \\approx 2.2~R_{\\odot}$ (40 MHz).
On the influence of crack closure on strength estimates of wood
Nielsen, Lauge Fuglsang
2004-01-01
Three well-known duration of load models (Gerhard, Barrett/Foschi, DVM) are considered in this note with respect to their ability to predict the lifetime of wood subjected to harmonically varying loads. The result obtained is that they predict practically the same lifetime—which for low frequency...... loading can be considered approximately true. For higher frequencies, however, the predicted lifetimes can be greatly overestimated. The reason is that the models considered do not take into account the effect of the crack closure phenomenon (which is among the main mechanisms of energy dissipation causing fatigue failure...... in metals). It is suggested that any of the simple models can be used in practice when low frequency load variations are considered. The DVM model, however, should be preferred because of its ability to predict residual strength, and because of its 'built-in' flexibility with respect to wood quality...
International activities in HF sky-wave field-strength estimation (period 1956-1991)
P. A. Bradley
1998-06-01
Methods for the determination of the strengths of radio signals reflected from the ionosphere and propagated to distant locations are required for service planning and circuit operation. Efforts following World War II to arrive at agreed procedures are described, and some of the features of the various empirical prediction methods that have been formulated over the years are discussed. The problems of determining a "best" method from among those available are highlighted. Measurement data collected for this purpose are reviewed and attention is drawn to their limitations of accuracy and coverage. Even the comparison of predicted and measured values is not straightforward, and the techniques that have been developed to do this are considered.
Strasters, J K; Breyer, E D; Rodgers, A H; Khaledi, M G
1990-07-06
Previously, the simultaneous enhancement of separation selectivity with elution strength was reported in micellar liquid chromatography (MLC) using the hybrid eluents of water-organic solvent-micelles. The practical implication of this phenomenon is that better separations can be achieved in shorter analysis times by using the hybrid eluents. Since both micelle concentration and volume fraction of organic modifier influence selectivity and solvent strength, only an investigation of the effects of a simultaneous variation of these parameters will disclose the full separation capability of the method; i.e., the commonly used sequential solvent optimization approach of adjusting the solvent strength first and then improving selectivity in reversed-phase liquid chromatography is inefficient for the case of MLC with the hybrid eluents. This is illustrated in this paper with two examples: the optimization of the selectivity in the separation of a mixture of phenols and the optimization of a resolution-based criterion determined for the separation of a number of amino acids and small peptides. The large number of variables involved in the separation process in MLC necessitates a structured approach in the development of practical applications of this technique. A regular change in retention behavior is observed with the variation of the surfactant concentration and the concentration of organic modifier, which enables a successful prediction of retention times. Consequently, interpretive optimization strategies such as the iterative regression method are applicable.
Selection of the Linear Regression Model According to the Parameter Estimation
[Anonymous]
2000-01-01
In this paper, based on the theory of parameter estimation, we give a method for selecting the linear regression model and argue that, in the sense of the good properties of the parameter estimates, it is very reasonable. Moreover, we offer a method for calculating the selection statistic and present an applied example.
Estimation of time-varying selectivity in stock assessments using state-space models
Nielsen, Anders; Berg, Casper Willestofte
2014-01-01
-varying selectivity pattern. The fishing mortality rates are considered (possibly correlated) stochastic processes, and the corresponding process variances are estimated within the model. The model is applied to North Sea cod and it is verified from simulations that time-varying selectivity can be estimated...
Alessandro Barbiero
2014-01-01
In many statistical applications, it is often necessary to obtain an interval estimate for an unknown proportion or probability or, more generally, for a parameter whose natural space is the unit interval. The customary approximate two-sided confidence interval for such a parameter, based on some version of the central limit theorem, is known to be unsatisfactory when its true value is close to zero or one or when the sample size is small. A possible way to tackle this issue is the transformation of the data through a proper function that is able to make the approximation to the normal distribution less coarse. In this paper, we study the application of several of these transformations in the context of the estimation of the reliability parameter for stress-strength models, with a special focus on the Poisson distribution. From this work, some practical hints emerge on which transformations can more efficiently improve standard confidence intervals, and in which scenarios.
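The transformation idea in the abstract above can be illustrated with a minimal sketch (the binomial setting, function names, and the 1.96 critical value are illustrative assumptions, not from the paper): a Wald interval computed on the logit scale and mapped back always stays inside (0, 1), unlike the plain Wald interval, which must be clipped near the boundary.

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Standard (untransformed) Wald interval for a proportion, clipped to [0, 1]."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - z * se), min(1.0, p_hat + z * se)

def logit_ci(p_hat, n, z=1.96):
    """Wald interval built on the logit scale, then back-transformed.
    By the delta method, se(logit(p_hat)) ~= 1 / sqrt(n * p_hat * (1 - p_hat))."""
    logit = math.log(p_hat / (1 - p_hat))
    se = 1.0 / math.sqrt(n * p_hat * (1 - p_hat))
    inv = lambda x: 1.0 / (1.0 + math.exp(-x))   # inverse logit
    return inv(logit - z * se), inv(logit + z * se)
```

Near p = 0.95 with n = 20, the back-transformed interval is asymmetric around the point estimate and remains inside the unit interval by construction.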
The importance of spatial models for estimating the strength of density dependence
Thorson, James T.; Skaug, Hans J.; Kristensen, Kasper;
2014-01-01
Identifying the existence and magnitude of density dependence is one of the oldest concerns in ecology. Ecologists have aimed to estimate density dependence in population and community data by fitting a simple autoregressive (Gompertz) model for density dependence to time series of abundance...... for an entire population. However, it is increasingly recognized that spatial heterogeneity in population densities has implications for population and community dynamics. We therefore adapt the Gompertz model to approximate local densities over continuous space instead of population-wide abundance......, and to allow productivity to vary spatially. Using simulated data generated from a spatial model, we show that the conventional (nonspatial) Gompertz model will result in biased estimates of density dependence, e.g., identifying oscillatory dynamics when not present. By contrast, the spatial Gompertz model...
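The nonspatial Gompertz model referred to above is a first-order autoregression on log abundance; a minimal simulation (parameter values are made up for illustration) shows how a density-dependence coefficient b < 1 can be recovered from a population time series by ordinary least squares:

```python
import random

random.seed(1)

# Gompertz (log-linear autoregressive) population model:
# log N_{t+1} = a + b * log N_t + process noise; b < 1 implies density dependence.
a, b, sigma = 0.5, 0.7, 0.1
x = [2.0]                                   # initial log abundance
for _ in range(500):
    x.append(a + b * x[-1] + random.gauss(0.0, sigma))

# OLS estimate of b: regress x_{t+1} on x_t.
xs, ys = x[:-1], x[1:]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b_hat = (sum((u - mx) * (v - my) for u, v in zip(xs, ys))
         / sum((u - mx) ** 2 for u in xs))
```

The paper's point is that when densities vary over space, fitting this population-wide model can bias b_hat; the sketch only shows the conventional estimator it generalizes.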
Zhang, Hu; Zhu, Haihong; Nie, Xiaojia; Qi, Ting; Hu, Zhiheng; Zeng, Xiaoyan
2016-04-01
This paper describes the fabrication and heat treatment of a high-strength Al-Cu-Mg alloy produced by the selective laser melting (SLM) process. Al-Cu-Mg alloy is one of the heat-treatable aluminum alloys regarded as difficult to fusion weld. SLM is an additive manufacturing technique through which components are built by selectively melting powder layers with a focused laser beam. The process is characterized by short laser-powder interaction times and localized high heat input, which leads to steep thermal gradients, rapid solidification and fast cooling. In this research, 3D Al-Cu-Mg parts with a relatively high density of 99.8% are produced by SLM from gas-atomized powders. Room temperature tensile tests reveal a remarkable mechanical behavior: the samples show yield and tensile strengths of about 276 MPa and 402 MPa, respectively, along with a fracture strain of 6%. The effect of solution treatment on microstructure and related tensile properties is examined and the results demonstrate that the mechanical behavior of the SLMed Al-Cu-Mg samples can be greatly enhanced through proper heat treatment. After T4 solution treatment at 540°C, under the effect of precipitation strengthening, the tensile strength and the yield strength increase to 532 MPa and 338 MPa, respectively, and the elongation increases to 13%.
Ozturk, H.; Altinpinar, M.
2017-07-01
The point load (PL) test is generally used for estimation of uniaxial compressive strength (UCS) of rocks because of its economic advantages and simplicity in testing. If the PL index of a specimen is known, the UCS can be estimated using conversion factors. Several conversion factors have been proposed by various researchers and they are dependent upon the rock type. In the literature, conversion factors on different sedimentary, igneous and metamorphic rocks can be found, but no study exists on trona. In this study, laboratory UCS and field PL tests were carried out on trona and interbeds of volcano-sedimentary rocks. Based on these tests, PL to UCS conversion factors of trona and interbeds are proposed. The tests were modeled numerically using a distinct element method (DEM) software, particle flow code (PFC), in an attempt to guide researchers having various types of modeling problems (excavation, cavern design, hydraulic fracturing, etc.) of the abovementioned rock types. Average PFC parallel bond contact model micro properties for the trona and interbeds were determined within this study so that future researchers can use them to avoid the rigorous PFC calibration procedure. It was observed that PFC overestimates the tensile strength of the rocks by a factor that ranges from 22 to 106.
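A point-load-to-UCS conversion factor of the kind the study proposes is typically obtained by fitting UCS = k · Is50 through the origin. The sketch below uses hypothetical paired measurements, not the trona data from the paper:

```python
# Hypothetical paired measurements: point-load index Is50 (MPa) and lab UCS (MPa).
is50 = [1.2, 1.8, 2.5, 3.1, 4.0]
ucs  = [26.0, 40.0, 54.0, 69.0, 86.0]

# Least-squares fit of UCS = k * Is50 (regression through the origin),
# the usual form of a point-load-to-UCS conversion factor.
k = sum(i * u for i, u in zip(is50, ucs)) / sum(i * i for i in is50)

def predict_ucs(point_load_index):
    """Estimate UCS (MPa) from a point-load index using the fitted factor."""
    return k * point_load_index
```

With real data, k is reported per rock type, which is why the paper derives separate factors for trona and its interbeds.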
Cudeck, Robert
1991-01-01
Two algorithms that automatically select subsets of variables (PACE algorithm) and reference variables (Fabin estimators), respectively, used for the noniterative estimators are presented. The PACE algorithm is based on a nonsymmetric matrix sweep operator. A Monte Carlo experiment compares the relative performance of these estimators and others.…
On using sample selection methods in estimating the price elasticity of firms' demand for insurance.
Marquis, M Susan; Louis, Thomas A
2002-01-01
We evaluate a technique based on sample selection models that has been used by health economists to estimate the price elasticity of firms' demand for insurance. We demonstrate that this technique produces inflated estimates of the price elasticity. We show that alternative methods lead to valid estimates.
Feature Subset Selection by Estimation of Distribution Algorithms
Cantu-Paz, E
2002-01-17
This paper describes the application of four evolutionary algorithms to the identification of feature subsets for classification problems. Besides a simple GA, the paper considers three estimation of distribution algorithms (EDAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine if the EDAs present advantages over the simple GA in terms of accuracy or speed in this problem. The experiments used a Naive Bayes classifier and public-domain and artificial data sets. In contrast with previous studies, we did not find evidence to support or reject the use of EDAs for this problem.
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-01-01
Design spaces for multiple dose strengths of tablets were constructed using a Bayesian estimation method with one set of design of experiments (DoE) of only the highest dose-strength tablet. The lubricant blending process for theophylline tablets with dose strengths of 100, 50, and 25 mg is used as a model manufacturing process in order to construct design spaces. The DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) for theophylline 100-mg tablet. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) of the 100-mg tablet were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. Three experiments under an optimal condition and two experiments under other conditions were performed using 50- and 25-mg tablets, respectively. The response surfaces of the highest-strength tablet were corrected to those of the lower-strength tablets by Bayesian estimation using the manufacturing data of the lower-strength tablets. Experiments under three additional sets of conditions of lower-strength tablets showed that the corrected design space made it possible to predict the quality of lower-strength tablets more precisely than the design space of the highest-strength tablet. This approach is useful for constructing design spaces of tablets with multiple strengths.
[Selection of biomass estimation models for Chinese fir plantation].
Li, Yan; Zhang, Jian-guo; Duan, Ai-guo; Xiang, Cong-wei
2010-12-01
A total of 11 kinds of biomass models were adopted to estimate the biomass of single trees and their organs in young (7-year-old), middle-aged (16-year-old), mature (28-year-old), and mixed-age Chinese fir plantations. In total, 308 biomass models were fitted. Among the 11 kinds of biomass models, power function models fitted best, followed by exponential models, and then polynomial models. Twenty-one optimal biomass models for individual organs and single trees were chosen, including 18 models for individual organs and 3 models for single trees. There were 7 optimal biomass models for the single tree in the mixed-age plantation, containing 6 for individual organs and 1 for the single tree, all in the form of power functions. The optimal biomass models for single trees in plantations of a given age had poor generality, but the ones for the mixed-age plantation had a certain generality with high accuracy, and could be used for estimating the biomass of single trees in plantations of different ages. The optimal biomass models for single Chinese fir trees in Shaowu, Fujian Province, were used to predict single-tree biomass in a mature (28-year-old) Chinese fir plantation in Jiangxi Province, and it was found that the models based on a large sample of forest biomass had a relatively high accuracy and could be applied over large areas, whereas the regional models based on small samples were limited to small areas.
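The power-function biomass models that fitted best have the form W = a·D^b and are conventionally fitted by linear regression after log transformation. A sketch with hypothetical diameter-biomass pairs (not the Chinese fir data from the study):

```python
import math

# Hypothetical tree diameters D (cm) and biomass W (kg), roughly following W = a * D^b.
D = [8.0, 12.0, 16.0, 20.0, 24.0]
W = [18.1, 49.9, 102.4, 178.9, 282.2]

# Fit the power-function model via linear regression on log-transformed data:
# log W = log a + b * log D.
lx = [math.log(d) for d in D]
ly = [math.log(w) for w in W]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
     / sum((u - mx) ** 2 for u in lx))
a = math.exp(my - b * mx)
```

Here the fit recovers a ≈ 0.1 and b ≈ 2.5, the allometric exponent; in practice a back-transformation bias correction is often applied as well.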
Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics
Simon, Donald L.; Rinehart, Aidan W.
2016-01-01
This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
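The linear-estimation metric described above can be sketched for a toy problem (the measurement matrix, noise variances, and sensor count are invented for illustration): score each candidate sensor suite by the trace of the MAP posterior covariance, i.e., the theoretical sum of squared estimation errors, and search exhaustively.

```python
import itertools
import numpy as np

# Toy linear measurement model y = H x + v for 2 health parameters
# and 4 candidate sensors (values are illustrative, not from the paper).
H = np.array([[1.0, 0.2],
              [0.1, 1.0],
              [0.8, 0.8],
              [0.3, 0.1]])
R_diag = np.array([0.10, 0.10, 0.05, 0.50])   # sensor noise variances
P0 = np.eye(2)                                 # prior covariance of x

def sse_metric(sensors):
    """Theoretical sum of squared estimation errors: trace of the MAP
    posterior covariance for the given sensor subset."""
    idx = list(sensors)
    Hs = H[idx]
    Rinv = np.diag(1.0 / R_diag[idx])
    P = np.linalg.inv(np.linalg.inv(P0) + Hs.T @ Rinv @ Hs)
    return float(np.trace(P))

# Exhaustive search over all 2-sensor suites, as in the paper's search strategy.
best = min(itertools.combinations(range(4), 2), key=sse_metric)
```

Adding sensors can only add information in this model, so the full suite always scores at least as well as any subset; the interesting question is which small suite comes closest.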
Austen, Emily J; Weis, Arthur E
2016-02-24
Our understanding of selection through male fitness is limited by the resource demands and indirect nature of the best available genetic techniques. Applying complementary, independent approaches to this problem can help clarify evolution through male function. We applied three methods to estimate selection on flowering time through male fitness in experimental populations of the annual plant Brassica rapa: (i) an analysis of mating opportunity based on flower production schedules, (ii) genetic paternity analysis, and (iii) a novel approach based on principles of experimental evolution. Selection differentials estimated by the first method disagreed with those estimated by the other two, indicating that mating opportunity was not the principal driver of selection on flowering time. The genetic and experimental evolution methods exhibited striking agreement overall, but a slight discrepancy between the two suggested that negative environmental covariance between age at flowering and male fitness may have contributed to phenotypic selection. Together, the three methods enriched our understanding of selection on flowering time, from mating opportunity to phenotypic selection to evolutionary response. The novel experimental evolution method may provide a means of examining selection through male fitness when genetic paternity analysis is not possible.
The Bethe Sum Rule and Basis Set Selection in the Calculation of Generalized Oscillator Strengths
Cabrera-Trujillo, Remigio; Sabin, John R.; Oddershede, Jens;
1999-01-01
Fulfillment of the Bethe sum rule may be construed as a measure of basis set quality for atomic and molecular properties involving the generalized oscillator strength distribution. It is first shown that, in the case of a complete basis, the Bethe sum rule is fulfilled exactly in the random phase...
Guerin, G.; Goldberg, D.; Meltser, A.
2001-05-01
One of the recurring challenges in deep-sea drilling has been to maintain a precise control of coring depths, and to limit the influence of surface heave on drill bit motion. Operating in water depths between 1 and 6 km, the Ocean Drilling Program has relied on a drill string heave compensator to collect more than 200 km of cores, with maximum penetration of 2 km. To evaluate the efficiency of the heave compensation, a device was developed to measure downhole acceleration from the top of the core barrel. This probe records three-axis acceleration and pressure at up to 100 samples per second. The first deployments of this instrument on ODP Legs 185 and 191 show that the heave compensator limits the bit motion to about 10% of the surface heave. This device could prove most useful to monitor heave compensation on shallow water drilling platforms where heave is a primary concern. The high-resolution downhole acceleration data can also be used to determine some of the mechanical properties of the formation. When deployed on piston cores, a maximum vertical acceleration of up to 3G is recorded as the coring shoe penetrates the formation. This maximum value is characteristic of the sediment strength and its degree of consolidation and can be used to identify formations that are typically difficult to recover, such as hard layers or hydrate-bearing sediments. With rotary coring, downhole acceleration signals decrease in magnitude and frequency content with the increasing hardness of the formation. High amplitudes are observed in uncompacted sediments and low amplitudes in low-porosity oceanic basalt. Comparison between acceleration records and geophysical logs shows that this relationship can be observed at a dm-scale within an individual core. Easy to deploy and adding almost no time to coring operations, the downhole accelerometer tool offers a way to characterize formations continuously while coring, which is particularly useful in the event of poor core recovery.
Behavioral estimates of human frequency selectivity at low frequencies
Orellana, Carlos Andrés Jurado
A fundamental property of our hearing organ is its ability to break down sound into different spectral components, allowing us to make use of the richness in natural sound phenomena. Auditory filters, which conceptualize this property of the ear, however, have not been appropriately described...... at low sound frequencies. As a consequence of our lack of knowledge, we cannot accurately model our perception of complex low-frequency sound (such as that emitted by wind turbines or industrial processes, which can easily produce annoyance) nor make meaningful predictions of our perception based...... on physical sound measurements. In this PhD thesis a detailed description of frequency selectivity at low frequencies is given. Different experiments have been performed to determine the properties of human auditory filters. Besides, loudness perception of low-frequency sinusoidal signals has been evaluated...
Varela, Aurore; Chouinard, Luc; Lesage, Elisabeth; Guldberg, Robert; Smith, Susan Y; Kostenuik, Paul J; Hattersley, Gary
2017-02-01
Abaloparatide is a novel 34 amino acid peptide selected to be a potent and selective activator of the parathyroid hormone receptor 1 (PTHR1) signaling pathway. The effects of 12 months of abaloparatide treatment on bone mass, bone strength and bone quality was assessed in osteopenic ovariectomized (OVX) rats. SD rats were subjected to OVX or sham surgery at 6 months of age and left untreated for 3 months to allow OVX-induced bone loss. Eighteen OVX rats were sacrificed after this bone depletion period, and the remaining OVX rats received daily s.c. injections of vehicle (n=18) or abaloparatide at 1, 5 or 25 μg/kg/d (n=18/dose level) for 12 months. Sham controls (n=18) received vehicle daily. Bone changes were assessed by DXA and pQCT after 0, 3, 6 or 12 months of treatment, and destructive biomechanical testing was conducted at month 12 to assess bone strength and bone quality. Abaloparatide dose-dependently increased bone mass at the lumbar spine and at the proximal and diaphyseal regions of the tibia and femur. pQCT revealed that increased cortical bone volume at the tibia was a result of periosteal expansion and endocortical bone apposition. Abaloparatide dose-dependently increased structural strength of L4-L5 vertebral bodies, the femur diaphysis, and the femur neck. Increments in peak load for lumbar spine and the femur diaphysis of abaloparatide-treated rats persisted even after adjusting for treatment-related increments in BMC, and estimated material properties were maintained or increased at the femur diaphysis with abaloparatide. The abaloparatide groups also exhibited significant and positive correlations between bone mass and bone strength at these sites. These data indicate that gains in cortical and trabecular bone mass with abaloparatide are accompanied by and correlated with improvements in bone strength, resulting in maintenance or improvement in bone quality. Thus, this study demonstrated that long-term daily administration of abaloparatide to
CHUNG Warn-ill; CHOI Jun-ho; BAE Hae-young
2004-01-01
Many commercial database systems maintain histograms to summarize the contents of relations and permit the efficient estimation of query result sizes and access plan costs. In spatial database systems, most spatial query predicates consist of topological relationships between spatial objects, and it is very important for the spatial query optimizer to estimate the selectivity of those predicates. In this paper, we propose a selectivity estimation scheme for spatial topological predicates based on a multidimensional histogram and a transformation scheme. The proposed scheme applies a two-partition strategy on the transformed object space to generate the spatial histogram and estimates the selectivity of topological predicates based on the topological characteristics of the transformed space. The proposed scheme provides a way of estimating selectivity without excessive memory usage or additional I/Os in most spatial query optimizers.
Warner, D.M.; Kiley, C.S.; Claramunt, R.M.; Clapp, D.F.
2008-01-01
We used growth and diet data from a fishery-independent survey of Chinook salmon Oncorhynchus tshawytscha, acoustic estimates of prey density and biomass, and statistical catch-at-age modeling to study the influence of the year-class strength of alewife Alosa pseudoharengus on the prey selection and abundance of age-1 Chinook salmon in Lake Michigan during the years 1992-1996 and 2001-2005. Alewives age 2 or younger were a large part of age-1 Chinook salmon diets but were not selectively fed upon by age-1 Chinook salmon in most years. Feeding by age-1 Chinook salmon on alewives age 2 or younger became selective as the biomass of alewives in that young age bracket increased, and age-1 Chinook salmon also fed selectively on young bloaters Coregonus hoyi when bloater density was high. Selection of older alewives decreased at high densities of alewives age 2 or younger and, in some cases, high densities of bloater. The weight and condition of age-1 Chinook salmon were not related to age-1 Chinook salmon abundance or prey abundance, but the abundance of age-1 Chinook salmon in year t was positively related to the density of age-0 alewives in year t - 1. Our results suggest that alewife year-class strength exerts a positive bottom-up influence on age-1 Chinook salmon abundance, with prey-switching behavior by young Chinook salmon contributing to the stability of the predator-prey relationship between Chinook salmon and alewives. © American Fisheries Society 2008.
Zhang, Guomin; Sandanayake, Malindu; Setunge, Sujeeva; Li, Chunqing; Fang, Jun
2017-02-01
Emissions from equipment usage and transportation at the construction stage are classified as the direct emissions which include both greenhouse gas (GHG) and non-GHG emissions due to partial combustion of fuel. Unavailability of a reliable and complete inventory restricts an accurate emission evaluation on construction work. The study attempts to review emission factor standards readily available worldwide for estimating emissions from construction equipment. Emission factors published by United States Environmental Protection Agency (US EPA), Australian National Greenhouse Accounts (AUS NGA), Intergovernmental Panel on Climate Change (IPCC) and European Environmental Agency (EEA) are critically reviewed to identify their strengths and weaknesses. A selection process based on the availability and applicability is then developed to help identify the most suitable emission factor standards for estimating emissions from construction equipment in the Australian context. A case study indicates that a fuel based emission factor is more suitable for GHG emission estimation and a time based emission factor is more appropriate for estimation of non-GHG emissions. However, the selection of emission factor standards also depends on factors like the place of analysis (country of origin), data availability and the scope of analysis. Therefore, suitable modifications and assumptions should be incorporated in order to represent these factors. Copyright © 2016 Elsevier Ltd. All rights reserved.
Vast Volatility Matrix Estimation using High Frequency Data for Portfolio Selection
Fan, Jianqing; Yu, Ke
2010-01-01
Portfolio allocation with gross-exposure constraint is an effective method to increase the efficiency and stability of selected portfolios among a vast pool of assets, as demonstrated in Fan et al (2008). The required high-dimensional volatility matrix can be estimated by using high frequency financial data. This enables us to better adapt to the local volatilities and local correlations among a vast number of assets and to increase significantly the sample size for estimating the volatility matrix. This paper studies volatility matrix estimation using high-dimensional high-frequency data from the perspective of portfolio selection. Specifically, we propose using the "pairwise-refresh time" and "all-refresh time" methods of Barndorff-Nielsen et al (2008) for estimation of the vast covariance matrix and compare their merits in portfolio selection. We also establish concentration inequalities for the estimates, which guarantee desirable properties of the estimated volatility matrix in vast asset ...
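The portfolio problem described above can be sketched in a few lines. This is a simplified stand-in, not the paper's estimator: it computes global minimum-variance weights from an estimated covariance matrix and reports the gross exposure that Fan et al.'s constraint would bound; imposing the constraint itself requires a quadratic program, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(size=(500, 5))      # hypothetical return observations for 5 assets
sigma = np.cov(returns, rowvar=False)    # estimated volatility (covariance) matrix

# Global minimum-variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
# Fan et al.'s gross-exposure constraint ||w||_1 <= c would enter as an extra
# condition in a quadratic program; this sketch solves the unconstrained case.
ones = np.ones(sigma.shape[0])
w = np.linalg.solve(sigma, ones)
w /= w.sum()                             # weights sum to one
gross_exposure = np.abs(w).sum()         # the quantity the constraint bounds
```

Because the weights sum to one, the gross exposure is always at least 1; values above 1 indicate short positions, which the constraint limits.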
Goldenshluger, Alexander
2010-01-01
We address the problem of density estimation with $\bL_p$-loss by selection of kernel estimators. We develop a selection procedure and derive corresponding $\bL_p$-risk oracle inequalities. It is shown that the proposed selection rule leads to the minimax estimator that is adaptive over a scale of the anisotropic Nikol'skii classes. The main technical tools used in our derivations are uniform bounds on the $\bL_p$-norms of empirical processes developed recently in Goldenshluger and Lepski (2010).
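For orientation, here is what kernel density estimation with a simple bandwidth selector looks like; the paper's oracle selection rule is far more refined than the Silverman rule-of-thumb used in this sketch, which is included only to make the estimation problem concrete.

```python
import math
import random

def kde(sample, h, x):
    """Gaussian kernel density estimate at point x with bandwidth h."""
    n = len(sample)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in sample) / (n * h * math.sqrt(2 * math.pi))

random.seed(1)
sample = [random.gauss(0.0, 1.0) for _ in range(2000)]

# Silverman's rule-of-thumb bandwidth: a crude selector, not the paper's procedure.
mean = sum(sample) / len(sample)
sd = (sum((s - mean) ** 2 for s in sample) / (len(sample) - 1)) ** 0.5
h = 1.06 * sd * len(sample) ** (-1 / 5)

density_at_0 = kde(sample, h, 0.0)  # close to the true N(0,1) density at 0 (about 0.399)
```

A data-driven selection rule of the kind the paper develops would instead compare a family of such estimators and pick one whose risk nearly matches the best in the family (the oracle).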
Sensor selection for received signal strength-based source localization in wireless sensor networks
Anonymous
2011-01-01
Generally, localization is a nonlinear problem, while linearization is used to simplify this problem. Reasonable approximations could be achieved when signal-to-noise ratio (SNR) is large enough. Energy is a critical resource in wireless sensor networks, and system lifetime needs to be prolonged through the use of energy efficient strategies during system operation. In this paper, a closed-form solution for received signal strength (RSS)-based source localization in wireless sensor network (WSN) is obtained...
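The linearization referred to above can be illustrated concretely. The sketch below assumes that ranges have already been recovered from RSS via a path-loss model and uses hypothetical anchor positions; subtracting one anchor's range equation from the others turns the nonlinear problem into a linear least-squares one.

```python
import numpy as np

# Hypothetical setup: four anchor sensors at known positions and one source.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 4.0])
d = np.linalg.norm(anchors - source, axis=1)  # ranges, as recovered from RSS

# Linearize by subtracting the first anchor's equation:
# ||p - a_i||^2 - ||p - a_1||^2 = d_i^2 - d_1^2  is linear in p.
a1, d1 = anchors[0], d[0]
A = 2.0 * (anchors[1:] - a1)
b = d1 ** 2 - d[1:] ** 2 + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a1 ** 2)
est = np.linalg.lstsq(A, b, rcond=None)[0]  # least-squares source estimate
```

In the noiseless case the least-squares solution recovers the source exactly; with RSS noise the estimate degrades as SNR falls, which is why the abstract requires a sufficiently large SNR for the approximation to be reasonable.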
Lawaf, Shirin; Nasermostofi, Shahbaz; Afradeh, Mahtasadat
2017-01-01
PURPOSE Considering the importance of the metal-ceramic bond, the present study aimed to compare the bond strength of ceramics to cobalt-chrome (Co-Cr) alloys made by casting and selective laser melting (SLM). MATERIALS AND METHODS In this in-vitro experimental study, two sample groups were prepared, one comprising 10 Co-Cr metal frameworks fabricated by the SLM method and the other 10 Co-Cr metal frameworks fabricated by the lost-wax casting method, with dimensions of 0.5 × 3 × 25 mm (following ISO standard 9693). Porcelain with a thickness of 1.1 mm was applied on a 3 × 8-mm central rectangular area of each sample. Afterwards, the bond strengths of the samples were assessed with a universal testing machine. Statistical analysis was performed with the Kolmogorov-Smirnov test and t-test. RESULTS Bond strength in the conventionally cast group equaled 74.94 ± 16.06 MPa, while in the SLM group it equaled 69.02 ± 5.77 MPa. The difference was not statistically significant (P ≤ .05). CONCLUSION The results indicated that the bond strengths between ceramic and Co-Cr alloys made by the casting and SLM methods were not statistically different. PMID:28243392
Stępień, Sylwia; Szymański, Alojzy
2015-06-01
Investigation of geosynthetics behaviour has been carried out for many years. Before geosynthetics are used in practice, standard laboratory tests are carried out to determine their basic mechanical parameters. To examine the tensile strength of a sample extended at a constant strain rate, one measures the tensile force and strain. Note that geosynthetics work under different stretching conditions and temperatures, which can significantly reduce the strength of these materials. The paper presents results of tensile tests of geotextile at different strain rates and temperatures from 20 °C to 100 °C. The aim of this study was to determine the effect of temperature and strain rate on the tensile strength and strain of the woven geotextile. The article presents the method of investigation and the results. The data obtained allowed us to assess the material parameters which should be considered in the design of load-bearing structures that work at temperatures up to 100 °C.
Kanungo, D. P.; Sharma, Shaifaly; Pain, Anindya
2014-09-01
The shear strength parameters of soil (cohesion and angle of internal friction) are quite essential in solving many civil engineering problems. In order to determine these parameters, laboratory tests are used. The main objective of this work is to evaluate the potential of Artificial Neural Network (ANN) and Regression Tree (CART) techniques for the indirect estimation of these parameters. Four different models, considering different combinations of 6 inputs, such as gravel %, sand %, silt %, clay %, dry density, and plasticity index, were investigated to evaluate the degree of their effects on the prediction of shear parameters. A performance evaluation was carried out using Correlation Coefficient and Root Mean Squared Error measures. It was observed that for the prediction of friction angle, the performance of both the techniques is about the same. However, for the prediction of cohesion, the ANN technique performs better than the CART technique. It was further observed that the model considering all of the 6 input soil parameters is the most appropriate model for the prediction of shear parameters. Also, connection weight and bias analyses of the best neural network (i.e., 6/2/2) were attempted using Connection Weight, Garson, and proposed Weight-bias approaches to characterize the influence of input variables on shear strength parameters. It was observed that the Connection Weight Approach provides the best overall methodology for accurately quantifying variable importance, and should be favored over the other approaches examined in this study.
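As a much simpler stand-in for the ANN and CART models evaluated above, an ordinary least-squares fit on synthetic data shows the shape of the indirect-estimation task: predict a shear strength parameter from the six soil inputs. All soil values and coefficients below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic soil data: columns = gravel %, sand %, silt %, clay %, dry density, plasticity index
X = rng.uniform([0, 10, 10, 5, 1.4, 5], [30, 60, 50, 40, 2.0, 40], size=(100, 6))
true_w = np.array([-0.1, 0.2, 0.1, -0.3, 8.0, 0.05])  # invented ground-truth relationship
cohesion = X @ true_w + rng.normal(scale=0.5, size=100)  # synthetic target with noise

# Ordinary least squares with an intercept, standing in for the paper's ANN/CART models
Xb = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(Xb, cohesion, rcond=None)
pred = Xb @ w
r = np.corrcoef(pred, cohesion)[0, 1]  # correlation coefficient, the paper's first metric
```

The paper's performance evaluation (correlation coefficient and root mean squared error) applies to any such regressor, which is what makes the head-to-head ANN vs. CART comparison possible.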
A. A. Galanin
2014-01-01
Residual strength and diameters of the lichen Rhizocarpon sp. were measured on different elements of the Late Holocene glacial-cryogenic morphosculpture in the frontal parts of glaciers № 29 and 31 in the Suntar-Khayata Range. In all, 180 lichenometric sites (about 1000 individual measurements of Rhizocarpon sp.) and 150 sites for testing the residual strength (rebound value) were organized, on which 380 estimations of this parameter (5674 individual measurements) were performed. As the lichenometric index of age (minimal time of exposure) we used the statistic RH5, the mean of the five largest individuals at a local site; as the index of rebound value, Q, the mean of 80-100 unit measurements at the same site was taken. Use of data obtained at different times by aerospace surveys made it possible to derive the relationship between the RH5 statistic and the time t of morphosculpture exposure: RH5 = 0.0535t + 0.29. On the basis of the regression coefficients of the RH5 and Q indexes, the equation RH5 = 69209e^(-0.136Q) was deduced, as well as the equation connecting residual strength (rebound value) Q and time t of surface exposure: t = (69209e^(-0.136Q) - 0.29)/0.0535. Based on these equations, the age of the moraine belts of the above glaciers was estimated. The most developed moraine belt, now 600-700 m from the present-day glacier edges, was found to have formed over the whole Little Ice Age. Glaciers reached their maximum volumes during its first phase, i.e. the cold period of the 13th-15th centuries. The area of glacierization exceeded its current size by 35-40%. The glaciers remained at almost steady state until the middle of the 19th century and then began to retreat slowly. By the middle of the 20th century, the glaciers had shortened by 5-7%. The most intensive shrinking of these glaciers started in the second half of the 20th century.
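The two calibration equations reported in the abstract can be chained to turn a measured rebound value into an exposure age; a minimal sketch (rebound values below are invented for illustration):

```python
import math

def exposure_age(Q):
    """Surface exposure age t (years) from Schmidt-hammer rebound value Q,
    chaining the paper's calibrations RH5 = 69209*exp(-0.136*Q) and
    RH5 = 0.0535*t + 0.29, i.e. t = (69209*exp(-0.136*Q) - 0.29)/0.0535."""
    rh5 = 69209.0 * math.exp(-0.136 * Q)
    return (rh5 - 0.29) / 0.0535

# A harder (less weathered, younger) surface has a higher rebound value
# and therefore a smaller age estimate:
age_soft = exposure_age(40.0)
age_hard = exposure_age(60.0)
```

The exponential link means small differences in Q translate into large differences in estimated age, so the method is most reliable when rebound values are averaged over many unit measurements per site, as the authors did.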
Li, Yan; Chen, Jianjun; Liu, Jipeng; Zhang, Lei; Wang, Weiguo; Zhang, Shaofeng
2013-09-01
The reliability of all-ceramic crowns is of concern to both patients and doctors. This study introduces a new methodology for quantifying the reliability of all-ceramic crowns based on the stress-strength interference theory and finite element models. The variables selected for the reliability analysis include the magnitude of the occlusal contact area, the occlusal load and the residual thermal stress. The calculated reliabilities of crowns under different loading conditions showed that too small occlusal contact areas or too great a difference of the thermal coefficient between veneer and core layer led to high failure possibilities. These results were consistent with many previous reports. Therefore, the methodology is shown to be a valuable method for analyzing the reliabilities of restorations in the complicated oral environment.
Mahshid, Rasoul; Hansen, Hans Nørgaard; Loft Højbjerre, Klaus
2016-01-01
Additive manufacturing is rapidly developing and gaining popularity for direct metal fabrication systems like selective laser melting (SLM). The technology has shown significant improvement for high-quality fabrication of lightweight design-efficient structures such as conformal cooling channels...
Fang, Qile; Zhou, Xufeng; Deng, Wei; Zheng, Zhi; Liu, Zhaoping
2016-09-01
Graphene oxide (GO) based membranes have been widely applied in molecular separation based on the size-exclusion effect of the nanochannels formed by stacked GO sheets. However, it is still a challenge to prepare a freestanding GO-based membrane with the high mechanical strength and structural stability that are prerequisites for separation applications in aqueous solution. Here, a freestanding composite membrane based on bacterial cellulose (BC) and GO is designed and prepared. The BC network provides a porous skeleton that spreads the GO sheets and incorporates uniformly into the GO layers, which endows the BC + GO composite membrane with good water stability, excellent tensile strength, and improved toughness, guaranteeing its applicability to separation in water environments. The resulting BC + GO membrane exhibits clearly different permeation properties for inorganic/organic ions of different sizes and, in particular, can quickly separate nanometre-scale ions from angstrom-scale ones. Therefore, this novel composite membrane is considered to be a promising candidate in the applications of water purification, food industry, biomedicine, and pharmaceutical and fuel separation. PMID:27615451
Models to estimate genetic parameters in crossbred dairy cattle populations under selection.
Werf, van der J.H.J.
1990-01-01
Estimates of genetic parameters, needed to control breeding programs, have to be regularly updated due to changing environments and ongoing selection and crossing of populations. Restricted maximum likelihood methods optimally provide these estimates, assuming that the statistical-genetic model ...
Estimating the predictive quality of dose-response after model selection.
Hu, Chuanpu; Dong, Yingwen
2007-07-20
Prediction of dose-response is important in dose selection in drug development. As the true dose-response shape is generally unknown, model selection is frequently used, and predictions are based on the final selected model. Correctly assessing the quality of the predictions requires accounting for the uncertainties caused by the model selection process, which has been difficult. Recently, a new approach called data perturbation has emerged. It allows important predictive characteristics to be computed while taking model selection into consideration. We study, through simulation, the performance of data perturbation in estimating standard errors of parameter estimates and prediction errors. Data perturbation was found to give excellent prediction error estimates, although at times large Monte Carlo sizes were needed to obtain good standard error estimates. Overall, it is a useful tool for characterizing uncertainties in dose-response predictions, with the potential of allowing more accurate dose selection in drug development. We also look at the influence of model selection on estimation bias. This leads to insights into candidate model choices that enable good dose-response prediction.
Cormorant catch concerns for fishers: estimating the size-selectivity of a piscivorous bird.
Vladimir Troynikov
Conflict arises in fisheries worldwide when piscivorous birds target fish species of commercial value. This paper presents a method for estimating size-selectivity functions for piscivores and uses it to compare the predation selectivity of Great Cormorants (Phalacrocorax carbo sinensis L. 1758) with that of gill-net fishing on a European perch (Perca fluviatilis L. 1758) population in the Curonian Lagoon, Lithuania. Fishers often regard cormorants as an unwanted "satellite species", but the degree of direct competition and overlap in size-specific selectivity between fishers and cormorants is unknown. This study showed negligible overlap in selectivity between Great Cormorants and legal-sized commercial nets. The selectivity estimation method has general application potential for use in conjunction with population dynamics models to assess fish population responses to size-selective fishing by a wide range of piscivorous predators.
Digital Image Watermarking With Random Selection of Watermark Insertion Having Adaptive Strength
G.S. Kalra
2014-02-01
We present a digital image watermarking algorithm for grayscale images, implemented in the frequency domain. Before inserting the watermark, Hamming codes are added row-wise as well as column-wise. Two encryption techniques are applied to the ECC-protected watermark for its security. The pixel position for inserting the watermark is calculated using the starting row and column number of each 8×8 block. Pixel embedding strength is chosen on the criterion that low frequencies are robust to general signal-processing attacks, so a smaller value is embedded there, and vice versa. Results show that the watermarking algorithm is robust against common signal-processing attacks. The algorithm is also tested against multiple attacks.
Sensor selection for parameterized random field estimation in wireless sensor networks
Anonymous
2011-01-01
We consider the random field estimation problem with parametric trend in wireless sensor networks where the field can be described by unknown parameters to be estimated. Due to the limited resources, the network selects only a subset of the sensors to perform the estimation task with a desired performance under the D-optimal criterion. We propose a greedy sampling scheme to select the sensor nodes according to the information gain of the sensors. A distributed algorithm is also developed by consensus-based ...
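A common concrete form of such information-gain-based selection is a greedy D-optimal design: repeatedly add the sensor whose observation row most increases the log-determinant of the information matrix. The sketch below is a centralized version with invented observation rows; the paper's distributed, consensus-based variant is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.normal(size=(30, 4))  # observation rows h_i for 30 candidate sensors, 4 field parameters

def greedy_d_optimal(H, k, eps=1e-6):
    """Greedily pick k sensors maximizing log det of the information matrix sum_i h_i h_i^T."""
    n, p = H.shape
    chosen, M = [], eps * np.eye(p)  # small ridge keeps early determinants nonzero
    for _ in range(k):
        gains = [np.linalg.slogdet(M + np.outer(H[i], H[i]))[1] for i in range(n)]
        best = int(np.argmax([g if i not in chosen else -np.inf
                              for i, g in enumerate(gains)]))
        chosen.append(best)
        M = M + np.outer(H[best], H[best])
    return chosen, M

chosen, M = greedy_d_optimal(H, k=6)
```

Greedy selection is attractive here because each step needs only local information-gain evaluations, which is what makes a distributed, consensus-based implementation feasible.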
Yin, T; Tyas, A; PLEKHOV, O.; Terekhina, A.; L. Susmel
2014-01-01
In this paper the so-called Theory of Critical Distances (TCD) is reformulated to make it suitable for estimating the strength of notched metals subjected to dynamic loading. The TCD takes as its starting point the assumption that an engineering material's strength can accurately be predicted by directly post-processing the entire linear-elastic stress field acting on the material in the vicinity of the stress concentrator being assessed. In order to extend the use of the TCD to situations...
Bijma, P.; Muir, W.M.; Ellen, E.D.; Wolf, J.B.; Arendonk, van J.A.M.
2007-01-01
Interactions among individuals are universal, both in animals and in plants and in natural as well as domestic populations. Understanding the consequences of these interactions for the evolution of populations by either natural or artificial selection requires knowledge of the heritable components ...
Bachmann, Katherine Neubecker; Fazeli, Pouneh K; Lawson, Elizabeth A; Russell, Brian M; Riccio, Ariana D; Meenaghan, Erinne; Gerweck, Anu V; Eddy, Kamryn; Holmes, Tara; Goldstein, Mark; Weigel, Thomas; Ebrahimi, Seda; Mickley, Diane; Gleysteen, Suzanne; Bredella, Miriam A; Klibanski, Anne; Miller, Karen K
2014-12-01
Data suggest that anorexia nervosa (AN) and obesity are complicated by elevated fracture risk, but skeletal site-specific data are lacking. Traditional bone mineral density (BMD) measurements are unsatisfactory at both weight extremes. Hip structural analysis (HSA) uses dual-energy X-ray absorptiometry data to estimate hip geometry and femoral strength. Factor of risk (φ) is the ratio of force applied to the hip from a fall with respect to femoral strength; higher values indicate higher hip fracture risk. The objective of the study was to investigate hip fracture risk in AN and overweight/obese women. This was a cross-sectional study. The study was conducted at a Clinical Research Center. PATIENTS included 368 women (aged 19-45 y): 246 AN, 53 overweight/obese, and 69 lean controls. HSA-derived femoral geometry, peak factor of risk for hip fracture, and factor of risk for hip fracture attenuated by trochanteric soft tissue (φ(attenuated)) were measured. Most HSA-derived parameters were impaired in AN and superior in obese/overweight women vs controls at the narrow neck, intertrochanteric, and femoral shaft (P ≤ .03). The φ(attenuated) was highest in AN and lowest in overweight/obese women (P fractures. Femoral geometry by HSA, hip BMD, and factor of risk for hip fracture attenuated by soft tissue are impaired in AN and superior in obesity, suggesting higher and lower hip fracture risk, respectively. Only attenuated factor of risk was associated with fragility fracture prevalence, suggesting that variability in soft tissue padding may help explain site-specific fracture risk not captured by BMD.
A field test of the extent of bias in selection estimates after accounting for emigration
Letcher, B.H.; Horton, G.E.; Dubreuil, T.L.; O'Donnell, M. J.
2005-01-01
Question: To what extent does trait-dependent emigration bias selection estimates in a natural system? Organisms: Two freshwater cohorts of Atlantic salmon (Salmo salar) juveniles. Field site: A 1-km stretch of a small stream (West Brook) in western Massachusetts, USA, from which emigration could be detected continuously. Methods: Estimated viability selection differentials for body size either including or ignoring emigration (include = emigrants survived the interval; ignore = emigrants did not survive the interval) for 12 intervals. Results: Seasonally variable size-related emigration from our study site generated variable levels of bias in selection estimates for body size. The magnitude of this bias was closely related to the extent of size-dependent emigration during each interval. Including or ignoring the effects of emigration changed the significance of selection estimates in 5 of the 12 intervals, and changed the estimated direction of selection in 4 of the 12 intervals. These results indicate the extent to which inferences about selection in a natural system can be biased by failing to account for trait-dependent emigration. © 2005 Benjamin H. Letcher.
Muscle activation during selected strength exercises in women with chronic neck muscle pain
Andersen, L.L.; Kjaer, M.; Andersen, C.H.
2008-01-01
during selected strengthening exercises in women undergoing rehabilitation for chronic neck muscle pain (defined as a clinical diagnosis of trapezius myalgia). Subjects: The subjects were 12 female workers (age = 30-60 years) with a clinical diagnosis of trapezius myalgia and a mean baseline pain...
ESTIMATION OF BEARING STRENGTH OF BRACES AND STRUTS OF FARMS OF COVERAGE OF 6D TYPE HOTHOUSES
Degtyarev G. V.
2015-03-01
The article presents a method for estimating the bearing strength of the braces and struts of the roof trusses of hothouses. A thorough analysis of bearing strength became necessary in light of the mass erection of hothouses, especially in the Southern Federal District, whose structures had been purchased in the countries of the Near East. However, simple transfer of hothouse structures made abroad cannot be considered rational on the territory of the Russian Federation: most such structures do not survive even one winter of operation under the considerable snow and wind loads. The need to bring clarity to this situation became obvious. Successive static, dynamic and seismic analyses, performed according to the normative documents in force on the territory of the Russian Federation and to the supplier's norms, applied to the real sections of the load-bearing structural elements, yielded the utilization percentages of the examined structural elements stated below. Supporting braces of the roof trusses: by the norms of the Russian Federation, the utilization percentage at the first limit state is 999%, and at the second limit state 999%; by the norms of the Russian Federation taking the supplier's loads into account, the utilization percentage at the first limit state is 999%, and at the second limit state 999%. Stretched braces of the roof trusses: by the norms of the Russian Federation, the utilization percentage at the first limit state is 64.2%, and at the second limit state 721.8%; by the norms of the Russian Federation taking the supplier's loads into account, the utilization percentage at the first limit state is 25.8%, and at the second limit state 721.8%. The analysis presented allows establishing that at the load ...
Jacobson, Bert H; Conchola, Eric C; Smith, Doug B; Akehi, Kazuma; Glass, Rob G
2016-08-01
Jacobson, BH, Conchola, EC, Smith, DB, Akehi, K, and Glass, RG. Relationship between selected strength and power assessments to peak and average velocity of the drive block in offensive line play. J Strength Cond Res 30(8): 2202-2205, 2016-Typical strength training for football includes the squat and power clean (PC), and routinely measured variables include 1 repetition maximum (1RM) squat and 1RM PC along with the vertical jump (VJ) for power. However, little research exists regarding the association between the strength exercises and velocity of an actual on-the-field performance. The purpose of this study was to investigate the relationship of peak velocity (PV) and average velocity (AV) of the offensive line drive block to the 1RM squat, 1RM PC, the VJ, body mass (BM), and body composition. One repetition maximum assessments for the squat and PC were recorded along with VJ height, BM, and percent body fat. These data were correlated with PV and AV while performing the drive block. Peak velocity and AV were assessed using a Tendo Power and Speed Analyzer as the linemen fired, from a 3-point stance, into a stationary blocking dummy. Pearson product-moment analysis yielded significant (p ≤ 0.05) correlations between PV and AV and the VJ, the squat, and the PC. A significant inverse association was found for both PV and AV and body fat. These data help to confirm that the typical exercises recommended for American football linemen are positively associated with both the PV and AV needed for drive-block effectiveness. It is recommended that these exercises remain the focus of a weight-room protocol and that ancillary exercises be built around them. Additionally, efforts to reduce body fat are recommended.
Schrago, Carlos G
2014-08-01
Reliable estimates of ancestral effective population sizes are necessary to unveil the population-level phenomena that shaped the phylogeny and molecular evolution of the African great apes. Although several methods have previously been applied to infer ancestral effective population sizes, an analysis of the influence of the selective regime on the estimates of ancestral demography has not been thoroughly conducted. In this study, three independent data sets under different selective regimes were composed to tackle this issue. The results showed that selection had a significant impact on the estimates of ancestral effective population sizes of the African great apes. The inference of the ancestral demography of African great apes was affected by the selection regime. The effects, however, were not homogeneous along the ancestral populations of great apes. The effective population size of the ancestor of humans and chimpanzees was more impacted by the selection regime when compared to the same parameter in the ancestor of humans, chimpanzees and gorillas. Because the selection regime influenced the estimates of ancestral effective population size, it is reasonable to assume that a portion of the discrepancy found in previous studies that inferred the ancestral effective population size may be attributable to the differential action of selection on the genes sampled.
General model selection estimation of a periodic regression with a Gaussian noise
Konev, Victor; 10.1007/s10463-008-0193-1
2010-01-01
This paper considers the problem of estimating a periodic function in a continuous-time regression model with an additive stationary Gaussian noise having an unknown correlation function. A general model selection procedure based on arbitrary projective estimates, which does not require knowledge of the noise correlation function, is proposed. A non-asymptotic upper bound for the quadratic risk (an oracle inequality) is derived under mild conditions on the noise. For Ornstein-Uhlenbeck noise the risk upper bound is shown to be uniform in the nuisance parameter. In the case of Gaussian white noise the constructed procedure has some advantages over the procedure based on least squares estimates (LSE). The asymptotic minimaxity of the estimates is proved. The proposed model selection scheme is also extended to estimation from discrete data, applicable to situations where high-frequency sampling cannot be provided.
Marker-assisted selection can reduce true as well as pedigree-estimated inbreeding.
Pedersen, L D; Sørensen, A C; Berg, P
2009-05-01
This study investigated whether selection using genotype information reduced the rate and level of true inbreeding, that is, identity by descent, at a selectively neutral locus as well as a locus under selection compared with traditional BLUP selection. In addition, the founder representation at these loci and the within-family selection at the nonneutral locus were studied. The study was carried out using stochastic simulation of a population resembling the breeding nucleus of a dairy cattle population for 25 yr. Each year, 10 proven bulls were selected across herds along with 100 dams from within each of 40 herds. Selection was performed using BLUP, marker-assisted, or gene-assisted selection for a trait with low heritability (h2 = 0.04) only expressed in females, mimicking a health trait. The simulated genome consisted of 2 chromosomes. One biallelic quantitative trait loci (QTL) with an initial frequency of the favorable allele of 0.1, and initially explaining 25% of the genetic variance as well as 4 markers were simulated in linkage disequilibrium, all positioned at chromosome 1. Chromosome 2 was selectively neutral, and consisted of a single neutral locus. The results showed that in addition to reducing pedigree-estimated inbreeding, the incorporation of genotype information in the selection criteria also reduced the level and rate of true inbreeding. In general, true inbreeding in the QTL was greater than pedigree-estimated inbreeding with respect to both the level and rate of inbreeding, as expected. Also as expected, true and pedigree-estimated inbreeding in the neutral locus were the same. Furthermore, after 25 yr, or approximately 5 generations, the pedigree-estimated level of inbreeding was reduced by 11 and 24% compared with BLUP in gene- and marker-assisted selection, respectively, and the level of true inbreeding in the QTL was reduced by 22 and 13%, respectively. The difference between selection scenarios was found to be caused by a larger number of
Effects of ionic strength, temperature, and pH on degradation of selected antibiotics
Loftin, K.A.; Adams, C.D.; Meyer, M.T.; Surampalli, R.
2008-01-01
Aqueous degradation rates, which include hydrolysis and epimerization, for chlortetracycline (CTC), oxytetracycline (OTC), tetracycline (TET), lincomycin (LNC), sulfachlorpyridazine (SCP), sulfadimethoxine (SDM), sulfathiazole (STZ), trimethoprim (TRM), and tylosin A (TYL) were studied as a function of ionic strength (0.0015, 0.050, or 0.084 mg/L as Na2HPO4), temperature (7, 22, and 35 °C), and pH (2, 5, 7, 9, and 11). Multiple linear regression revealed that ionic strength did not significantly affect (α = 0.05) degradation rates for any compound, but temperature and pH affected the rates for CTC, OTC, and TET significantly (α = 0.05). Degradation also was observed for TYL at pH 2 and 11. No significant degradation was observed for LNC, SCP, SDM, STZ, TRM, and TYL (pH 5, 7, and 9) under study conditions. Pseudo first-order rate constants, half-lives, and Arrhenius coefficients were calculated where appropriate. In general, hydrolysis rates for CTC, OTC, and TET increased as pH and temperature increased, following Arrhenius relationships. Known degradation products were used to confirm that degradation had occurred, but these products were not quantified. Half-lives ranged from less than 6 h up to 9.7 wk for the tetracyclines and for TYL (pH 2 and 11), but no degradation of LNC, the sulfonamides, or TRM was observed during the study period. These results indicate that the tetracyclines, and TYL at pH 2 and 11, are prone to pH-mediated transformation and hydrolysis in some cases, but neither the sulfonamides, LNC, nor TRM is inclined to degrade under study conditions. This indicates that, with the exception of CTC, OTC, and TET, pH-mediated reactions such as hydrolysis and epimerization are not likely removal mechanisms in surface water, anaerobic swine lagoons, wastewater, and ground water. Copyright © 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
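The rate quantities named above relate through two standard formulas: the pseudo first-order half-life t1/2 = ln 2 / k, and the Arrhenius law k = A·exp(-Ea/(R·T)). A small sketch with invented parameter values (not the compounds' actual kinetics):

```python
import math

def half_life_hours(k_per_hour):
    """Half-life from a pseudo first-order rate constant: t1/2 = ln 2 / k."""
    return math.log(2) / k_per_hour

def arrhenius_k(A, Ea, T_kelvin, R=8.314):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T)); Ea in J/mol, T in K."""
    return A * math.exp(-Ea / (R * T_kelvin))

# Hypothetical pre-exponential factor and activation energy, evaluated at the
# study's temperature extremes (7 C and 35 C):
k_cold = arrhenius_k(A=1e9, Ea=6.0e4, T_kelvin=280.15)
k_warm = arrhenius_k(A=1e9, Ea=6.0e4, T_kelvin=308.15)
# warmer temperature -> larger k -> shorter half-life, matching the reported trend
```

Fitting ln k against 1/T over the three study temperatures is how the Arrhenius coefficients mentioned in the abstract would be extracted.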
Coutts, A; Reaburn, P; Piva, T J; Murphy, A
2007-02-01
The purpose of this study was to examine the influence of overreaching on muscle strength, power, endurance and selected biochemical responses in rugby league players. Seven semi-professional rugby league players (VO(2max) = 56.1 +/- 1.7 mL . kg (-1) . min (-1); age = 25.7 +/- 2.6 yr; BMI = 27.6 +/- 2.0) completed 6 weeks of progressive overload training with limited recovery periods. A short 7-day stepwise reduction taper immediately followed the overload period. Measures of muscular strength, power and endurance and selected biochemical parameters were taken before and after overload training and taper. Multistage fitness test running performance was significantly reduced (12.3 %) following the overload period. Although most other performance measures tended to decrease following the overload period, only peak hamstring torque at 1.05 rad . s (-1) was significantly reduced (p < 0.05). Following the taper, a significant increase in peak hamstring torque and isokinetic work at both slow (1.05 rad . s (-1)) and fast (5.25 rad . s (-1)) movement velocities was observed. Minimum clinically important performance decreases were measured in the multistage fitness test, vertical jump, 3-RM squat, 3-RM bench press and chin-up (max) following the overload period. Following the taper, minimum clinically important increases in the multistage fitness test, vertical jump, 3-RM squat, 3-RM bench press, chin-up (max) and 10-m sprint performance were observed. Compared to resting measures, the plasma testosterone to cortisol ratio, plasma glutamate, plasma glutamine to glutamate ratio and plasma creatine kinase activity demonstrated significant changes at the end of the overload training period (p < 0.05). These results suggest that muscular strength, power and endurance were reduced following the overload training, indicating a state of overreaching. The most likely explanation for the decreased performance is increased muscle damage via a decrease in the anabolic-catabolic balance.
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2010-07-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation.
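The tuner-selection idea described above can be sketched in a toy linear setting: with fewer sensors than health parameters, choose the subset of parameters to tune that minimizes a theoretical mean-squared estimation error. The sensitivity matrix, covariances, and brute-force search below are all illustrative assumptions, not the paper's engine model or its iterative search routine.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy underdetermined setup: 5 health parameters, 3 sensors (fewer sensors
# than unknowns, as in the engine problem described above).
n_health, n_sensors = 5, 3
H = rng.normal(size=(n_sensors, n_health))      # assumed sensor sensitivity matrix
P = np.diag([1.0, 0.8, 0.6, 0.4, 0.2])          # assumed health-parameter covariance
R = 0.01 * np.eye(n_sensors)                    # assumed measurement-noise covariance

def expected_mse(subset):
    """Theoretical MSE over all health parameters when only `subset` is tuned.

    The estimator x_hat = T z applies G = pinv(H[:, subset]) to z = H x + v
    and embeds the result into the full parameter vector; the error covariance
    follows from x_hat - x = (T H - I) x + T v.
    """
    G = np.linalg.pinv(H[:, subset])    # maps sensor vector -> selected tuners
    T = np.zeros((n_health, n_sensors))
    T[list(subset), :] = G              # untuned parameters are held at zero
    A = T @ H - np.eye(n_health)
    err_cov = A @ P @ A.T + T @ R @ T.T
    return float(np.trace(err_cov))

# Exhaustive search over all 3-element tuner subsets (brute force is fine
# at this size; the paper uses an iterative multi-variable search).
best = min(itertools.combinations(range(n_health), n_sensors), key=expected_mse)
print(best, expected_mse(best))
```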
Atterton, Thomas; De Groote, Isabelle; Eliopoulos, Constantine
2016-10-01
The construction of the biological profile from human skeletal remains is the foundation of anthropological examination. However, remains may be fragmentary, and the elements usually employed, such as the pelvis and skull, may not be available. The clavicle has been successfully used for sex estimation in samples from Iran and Greece. In the present study, the aim was to test the suitability of the measurements used in those previous studies on a British Medieval population. In addition, the project tested whether discrimination between sexes was due to size or clavicular strength. The sample consisted of 23 females and 25 males of pre-determined sex from two medieval collections: Poulton and Gloucester. Six measurements were taken using an osteometric board, sliding calipers and graduated tape. In addition, putty rings and bi-planar radiographs were made and robusticity measures calculated. The resulting variables were used in stepwise discriminant analyses. The linear measurements allowed correct sex classification in 89.6% of all individuals. This demonstrates the applicability of the clavicle for sex estimation in British populations. The most powerful discriminant factor was maximum clavicular length, and the best combination of factors was maximum clavicular length and circumference. This result is similar to that obtained by other studies. To further investigate the extent of sexual dimorphism of the clavicle, the biomechanical properties of the polar second moment of area J and the ratio of maximum to minimum bending rigidity were included in the analysis. These were found to have little influence when entered into the discriminant function analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.
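A minimal sketch of a two-measurement discriminant analysis of the kind used above, run on synthetic (not real) clavicle data; the group means and standard deviations are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic clavicle data: maximum length and circumference (mm), with males
# larger on average, mimicking the two best discriminators found above.
males = rng.normal([150.0, 37.0], [7.0, 3.0], size=(25, 2))
females = rng.normal([135.0, 32.0], [6.0, 3.0], size=(23, 2))

# Fisher's linear discriminant: w = Sw^-1 (mu_m - mu_f), threshold at the
# midpoint of the projected group means.
mu_m, mu_f = males.mean(axis=0), females.mean(axis=0)
Sw = np.cov(males.T) * (len(males) - 1) + np.cov(females.T) * (len(females) - 1)
w = np.linalg.solve(Sw, mu_m - mu_f)
c = w @ (mu_m + mu_f) / 2.0

predict_male = lambda x: x @ w > c
accuracy = (np.mean(predict_male(males)) + np.mean(~predict_male(females))) / 2.0
print(round(accuracy, 3))
```

With well-separated synthetic groups the resubstitution accuracy is high, in the same spirit as the 89.6% classification rate reported above.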
Genetic parameters estimates and visual selection for leaves production in Ilex paraguariensis
José Alfredo Sturion
2017-08-01
The selection of superior yerba mate genotypes based on the leaf weight of each plant relies on genetic parameters obtained from experimental plantations, and is practically impossible from the fourth year of age onward. Therefore, we estimated genetic parameters and checked the feasibility of selection through visual scores and estimated leaf weights of each tree at 18.5 years of age. The genetic material consists of a combined trial of provenances and progenies of half-sibs, with 140 progenies from 7 provenances, installed in Ivaí, Paraná, Brazil, in a randomized block design with 10 repetitions. The genetic control of leaf production is of low magnitude (ĥ²a = 0.175042 ± 0.0393), revealing a high influence of the environment. The additive genetic correlations between the real weight of leaves × scores and the real weight of leaves × visually estimated weight were of high magnitude (higher than 88%). Thus, selection based on leaf weight can be carried out without major losses in genetic gains by either methodology when the purpose is sexual selection, in which case the sort order has no importance. In the case of vegetative propagation aiming at clonal plantations, in which only the plants with the highest genotypic values should be selected, selection by means of visual scores and leaf weight estimates proved to be inefficient.
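The heritability and genetic-correlation figures above feed directly into the breeder's equation; a minimal sketch with the reported h² and an assumed selection differential (the selection differential and equal-heritability assumption are illustrative, not from the study):

```python
# Expected response to selection via the breeder's equation R = h^2 * S.
h2 = 0.175        # narrow-sense heritability of leaf weight reported above
S = 0.8           # selection differential in phenotypic SD units (assumed)

R = h2 * S
print(R)          # expected genetic gain per generation, ~0.14 SD

# A genetic correlation r_g between visual scores and true leaf weight scales
# the gain from indirect (score-based) selection; assuming equal heritability
# for both traits, the relative efficiency of indirect selection is simply r_g.
r_g = 0.88        # lower bound of the weight x score correlations reported above
print(r_g)        # indirect selection retains ~88% of the direct-selection gain
```

This is why the abstract can conclude that score-based selection loses little gain for sexual reproduction: the correlated response is within roughly 12% of direct selection under these assumptions.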
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
Simultaneous estimation and variable selection in median regression using Lasso-type penalty.
Xu, Jinfeng; Ying, Zhiliang
2010-06-01
We consider median regression with a LASSO-type penalty term for variable selection. With a fixed number of variables in the regression model, a two-stage method is proposed for simultaneous estimation and variable selection, where the degree of penalty is adaptively chosen. A Bayesian information criterion type approach is proposed and used to obtain a data-driven procedure which is proved to automatically select asymptotically optimal tuning parameters. It is shown that the resultant estimator achieves the so-called oracle property. The combination of median regression and the LASSO penalty is computationally easy to implement via standard linear programming. A random perturbation scheme can be used to obtain a simple estimator of the standard error. Simulation studies are conducted to assess the finite-sample performance of the proposed method. We illustrate the methodology with a real example.
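The claim that LASSO-penalized median regression reduces to a standard linear program can be sketched directly: split residuals and coefficients into positive and negative parts so both absolute-value sums become linear. The data, penalty level, and coefficients below are illustrative assumptions, not the paper's adaptive procedure.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# LAD (median) regression with a LASSO penalty, as a linear program:
#   minimize sum_i |y_i - x_i' b| + lam * sum_j |b_j|
# with variables [b+, b-, u+, u-] >= 0 and X(b+ - b-) + u+ - u- = y.
n, p, lam = 100, 5, 2.0
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])   # sparse truth (illustrative)
y = X @ beta_true + rng.laplace(scale=0.3, size=n)

c = np.concatenate([lam * np.ones(2 * p), np.ones(2 * n)])
A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
beta_hat = res.x[:p] - res.x[p:2 * p]
print(np.round(beta_hat, 2))
```

At the optimum, u+ + u- equals |y - Xb| componentwise, so the LP objective matches the penalized median-regression criterion exactly.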
Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects
Guangjie Li
2015-07-01
We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian model averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
Ma, Yanyuan
2013-09-01
We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.
Estimation Method of Path-Selecting Proportion for Urban Rail Transit Based on AFC Data
Feng Zhou
2015-01-01
With the successful application of the automatic fare collection (AFC) system in urban rail transit (URT), information on passengers' travel times is recorded, which makes it possible to analyze passengers' path selection from AFC data. In this paper, the distribution characteristics of the components of travel time were analyzed, and an estimation method for path-selecting proportion was proposed. This method made use of travel time data for single-path ODs from the AFC system to estimate the distribution parameters of the components of travel time, mainly including entry walking time (ewt), exit walking time (exwt), and transfer walking time (twt). Then, for multipath ODs, the distribution of each path's travel time could be calculated given its components' distributions. After that, each path's path-selecting proportion can be estimated. Finally, simulation experiments were designed to verify the estimation method, and the results show that the error rate is less than 2%. Compared with the traditional models of flow assignment, the estimation method can reduce the cost of manual surveys significantly and provides a new way to calculate the path-selecting proportion for URT.
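A hedged sketch of the estimation idea: treat a two-path OD's observed travel times as a mixture of per-path travel-time distributions (whose parameters would, in the method above, come from single-path ODs) and recover the path-selecting proportions as the mixture weights. All distribution parameters below are assumptions, and the EM step is one simple way to fit the weights, not necessarily the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Two candidate paths for one OD pair. Each path's total travel time (in-train
# time plus entry/exit walking components) is collapsed into one normal
# distribution whose parameters would be estimated from single-path ODs.
mu = np.array([30.0, 36.0])       # assumed path travel-time means (minutes)
sigma = np.array([2.0, 2.5])      # assumed path standard deviations
true_share = np.array([0.7, 0.3])

# Simulate AFC-observed travel times for 10,000 passengers.
n = 10_000
path = rng.choice(2, size=n, p=true_share)
t_obs = rng.normal(mu[path], sigma[path])

# EM for the mixture weights with known component densities: the estimated
# path-selecting proportion is the converged weight vector.
share = np.array([0.5, 0.5])
for _ in range(50):
    dens = np.vstack([share[k] * norm.pdf(t_obs, mu[k], sigma[k]) for k in range(2)])
    post = dens / dens.sum(axis=0)    # posterior path probability per record
    share = post.mean(axis=1)
print(np.round(share, 3))
```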
Kai, Bo; Li, Runze; Zou, Hui
2011-02-01
The complexity of semiparametric models poses new challenges to statistical inference and model selection that frequently arise from real applications. In this work, we propose new estimation and variable selection procedures for the semiparametric varying-coefficient partially linear model. We first study quantile regression estimates for the nonparametric varying-coefficient functions and the parametric regression coefficients. To achieve nice efficiency properties, we further develop a semiparametric composite quantile regression procedure. We establish the asymptotic normality of proposed estimators for both the parametric and nonparametric parts and show that the estimators achieve the best convergence rate. Moreover, we show that the proposed method is much more efficient than the least-squares-based method for many non-normal errors and that it only loses a small amount of efficiency for normal errors. In addition, it is shown that the loss in efficiency is at most 11.1% for estimating varying coefficient functions and is no greater than 13.6% for estimating parametric components. To achieve sparsity with high-dimensional covariates, we propose adaptive penalization methods for variable selection in the semiparametric varying-coefficient partially linear model and prove that the methods possess the oracle property. Extensive Monte Carlo simulation studies are conducted to examine the finite-sample performance of the proposed procedures. Finally, we apply the new methods to analyze the plasma beta-carotene level data.
Vast Volatility Matrix Estimation using High Frequency Data for Portfolio Selection.
Fan, Jianqing; Li, Yingying; Yu, Ke
2012-01-01
Portfolio allocation with gross-exposure constraint is an effective method to increase the efficiency and stability of portfolios selection among a vast pool of assets, as demonstrated in Fan et al. (2011). The required high-dimensional volatility matrix can be estimated by using high frequency financial data. This enables us to better adapt to the local volatilities and local correlations among vast number of assets and to increase significantly the sample size for estimating the volatility matrix. This paper studies the volatility matrix estimation using high-dimensional high-frequency data from the perspective of portfolio selection. Specifically, we propose the use of "pairwise-refresh time" and "all-refresh time" methods based on the concept of "refresh time" proposed by Barndorff-Nielsen et al. (2008) for estimation of vast covariance matrix and compare their merits in the portfolio selection. We establish the concentration inequalities of the estimates, which guarantee desirable properties of the estimated volatility matrix in vast asset allocation with gross exposure constraints. Extensive numerical studies are made via carefully designed simulations. Comparing with the methods based on low frequency daily data, our methods can capture the most recent trend of the time varying volatility and correlation, hence provide more accurate guidance for the portfolio allocation in the next time period. The advantage of using high-frequency data is significant in our simulation and empirical studies, which consist of 50 simulated assets and 30 constituent stocks of Dow Jones Industrial Average index.
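The "refresh time" construction underlying the all-refresh and pairwise-refresh methods mentioned above can be sketched in a few lines: a refresh time is the first instant by which every asset has traded at least once since the previous refresh time. This is a minimal sketch on toy timestamps, not the paper's covariance estimator.

```python
# "All-refresh time" sampling for asynchronous trade times.
def refresh_times(trade_times):
    """trade_times: list of sorted lists of trade timestamps, one per asset."""
    idx = [0] * len(trade_times)
    out = []
    while all(i < len(ts) for i, ts in zip(idx, trade_times)):
        # Next refresh time: the latest of each asset's next trade.
        t = max(ts[i] for i, ts in zip(idx, trade_times))
        out.append(t)
        # Advance every asset to its first trade strictly after t.
        idx = [next((j for j in range(i, len(ts)) if ts[j] > t), len(ts))
               for i, ts in zip(idx, trade_times)]
    return out

a = [1, 3, 4, 7, 9]   # trade times of asset A (toy data)
b = [2, 5, 6, 8]      # trade times of asset B (toy data)
print(refresh_times([a, b]))  # [2, 5, 7, 9]
```

Synchronizing prices at these refresh times is what lets a realized covariance be computed from asynchronous high-frequency observations; the pairwise variant applies the same rule to each asset pair separately to retain more data.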
Zapater-Pereyra, M; van Dien, F; van Bruggen, J J A; Lens, P N L
2013-01-01
A constructed wetroof (CWR) is defined in this study as the combination of a green roof and a constructed wetland: a shallow wastewater treatment system placed on the roof of a building. The foremost challenge of such CWRs, and the main aim of this investigation, is the selection of an appropriate matrix capable of assuring the required hydraulic retention time, long-term stability and the roof load-bearing capacity. Six substrata were subjected to water dynamics and destructive tests in two testing-tables. Among all the materials tested, the substratum configuration composed of sand, light expanded clay aggregates, and biodegradable polylactic acid beads together with stabilization plates and a turf mat is capable of retaining the water for approximately 3.8 days and of providing stability (stabilization plates) and immediate protection (turf mat) to the system. Based on those results, a full-scale CWR was built, which did not show any physical deterioration after 1 year of operation. Preliminary wastewater treatment results on the full-scale CWR suggest that it achieves high removal of the main wastewater pollutants (e.g. chemical oxygen demand, PO4(3-)-P and NH4(+)-N). The results of these tests and practical design considerations of the CWR are discussed in this paper.
Sample size estimation and sampling techniques for selecting a representative sample
Aamir Omair
2014-01-01
Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect ...
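The standard sample-size formula for estimating a proportion, which this kind of tutorial typically presents, can be sketched as follows; the 95% z-value, margin of error, and population size are the usual illustrative choices, not figures from the article.

```python
import math

# Sample size for estimating a proportion with a given absolute precision:
#   n = z^2 * p * (1 - p) / d^2
def sample_size_proportion(p, d, z=1.96):
    """p: anticipated proportion, d: absolute margin of error,
    z: z-value for the confidence level (1.96 for 95%)."""
    return math.ceil(z * z * p * (1.0 - p) / (d * d))

# Finite population correction when the target population size N is known:
def fpc(n, N):
    return math.ceil(n / (1.0 + (n - 1.0) / N))

print(sample_size_proportion(0.5, 0.05))            # 385, the familiar worst case
print(fpc(sample_size_proportion(0.5, 0.05), 1000)) # smaller n for N = 1000
```

Taking p = 0.5 maximizes p(1 - p), so 385 is the conservative choice when the true proportion is unknown.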
Dünki, Rudolf M.
2000-11-01
Limited predictability is one of the remarkable features of deterministic chaos, and this feature may be quantified in terms of Lyapunov exponents. Accordingly, Lyapunov-exponent estimates may be expected to follow in a natural way from forecast algorithms. Exploring this idea, we propose a method for estimating the largest Lyapunov exponent from a time series which uses the behavior of so-called simplex forecasts. The method considers the estimation of properties of the distribution of local simplex expansion coefficients. These are also used to define error bars for the Lyapunov-exponent estimates and allow for selective forecasts with improved prediction accuracy. We demonstrate these concepts on standard test examples and in three realistic time-series applications: largest-Lyapunov-exponent estimation of an experimentally obtained hyperchaotic NMR signal, brain state differentiation, and stock-market prediction.
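As a simple illustration of largest-Lyapunov-exponent estimation from a dynamical system (not the simplex-forecast method of the paper), the exponent of the logistic map at r = 4 can be computed from the average local stretching rate; its known value is ln 2 ≈ 0.693.

```python
import math

# Largest Lyapunov exponent of the logistic map x -> r x (1 - x), estimated as
# the orbit average of log |f'(x)| with f'(x) = r (1 - 2x).
def lyapunov_logistic(r, x0=0.2, n_transient=1000, n=100_000):
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1.0 - x)
    s = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        s += math.log(abs(r * (1.0 - 2.0 * x)))
    return s / n

lam = lyapunov_logistic(4.0)
print(lam)   # close to ln 2 = 0.693..., the known value at r = 4
```

A positive estimate like this is exactly the signature of the limited predictability the abstract refers to: forecast errors grow on average like exp(lam * t).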
Estimates of population change in selected species of tropical birds using mark-recapture data
Brawn, J.; Nichols, J.D.; Hines, J.E.; Nesbitt, J.
2000-01-01
The population biology of tropical birds is known for only a small sample of species, especially in the Neotropics. Robust estimates of parameters such as survival rate and the finite rate of population change (λ) are crucial for conservation purposes and useful for studies of avian life histories. We used methods developed by Pradel (1996, Biometrics 52:703-709) to estimate λ for 10 species of lowland tropical forest birds using data from a long-term (> 20 yr) banding study in Panama. These species constitute an ecologically and phylogenetically diverse sample. We present these estimates and explore whether they are consistent with what we know from selected studies of banded birds and from 5 yr of estimating nesting success (i.e., an important component of λ). A major goal of these analyses is to assess whether the mark-recapture methods generate more reliable and reasonably precise estimates of population change than traditional methods that require more sampling effort.
Effects of self-selected music on strength, explosiveness, and mood.
Biagini, Matthew S; Brown, Lee E; Coburn, Jared W; Judelson, Daniel A; Statler, Traci A; Bottaro, Martim; Tran, Tai T; Longo, Nick A
2012-07-01
There has been much investigation into the use of music as an ergogenic aid to facilitate physical performance. However, previous studies have primarily focused on predetermined music and aerobic exercise. The purpose of this study was to investigate the effects of self-selected music (SSM) vs. those of no music (NM) on the mood and performance of the athletes performing bench press and squat jump. Twenty resistance trained collegiate men completed 2 experimental conditions, one while listening to SSM and the other with NM. The subjects reported their profile of mood states (POMS) and rating of perceived exertion (RPE) before and after performing 3 sets to failure of the bench press at 75% 1 repetition maximum (1RM) and 3 reps of the squat jump at 30% 1RM. Statistical analyses revealed no differences in squat jump height or relative ground reaction force, but the takeoff velocity (SSM: 2.06 ± 0.17 m·s(-1); NM: 1.99 ± 0.18 m·s(-1)), rate of velocity development (SSM: 5.92 ± 1.46 m·s(-2); NM: 5.63 ± 1.70 m·s(-2)), and rate of force development (SSM: 3175.61 ± 1792.37 N·s(-1); NM: 2519.12 ± 1470.32 N·s(-1)) were greater with SSM, whereas RPE (SSM: 5.71 ± 1.37; NM: 6.36 ± 1.61) was greater with NM. Bench press reps to failure and RPE were not different between conditions. The POMS scores of vigor (SSM: 20.15 ± 5.58; NM: 17.45 ± 5.84), tension (SSM: 8.40 ± 3.99; NM: 6.07 ± 3.26), and fatigue (SSM: 8.65 ± 4.49; NM: 7.40 ± 4.38) were greater with SSM. This study demonstrated increased performance during an explosive exercise and an altered mood state when listening to SSM. Therefore, listening to SSM might be beneficial for acute power performance.
Use of phytoplankton pigments in estimating food selection of three marine copepods
Oechsler-Christensen, B.; Jonasdottir, Sigrun; Henriksen, P.
2012-01-01
Experiments were carried out to test the use of algal pigments in zooplankton grazing studies, with a special emphasis on the estimation of food selection. Traditional grazing experiments were carried out in parallel with pigment analysis in experiments where the copepod A. tonsa was exposed to a mixture of food organisms. The results demonstrated that the pigment composition of the phytoplankton food was reflected closely in the three copepod species Centropages typicus ... and that the two methods gave similar results with regard to food selection; with certain precautions, pigment analysis can be successfully used in food selection studies.
Hörger, Anja C; Ilyas, Muhammad; Stephan, Wolfgang; Tellier, Aurélien; van der Hoorn, Renier A L; Rose, Laura E
2012-01-01
Coevolution between hosts and pathogens is thought to occur between interacting molecules of both species. This results in the maintenance of genetic diversity at pathogen antigens (or so-called effectors) and host resistance genes such as the major histocompatibility complex (MHC) in mammals or resistance (R) genes in plants. In plant-pathogen interactions, the current paradigm posits that a specific defense response is activated upon recognition of pathogen effectors via interaction with their corresponding R proteins. According to the "Guard-Hypothesis," R proteins (the "guards") can sense modification of target molecules in the host (the "guardees") by pathogen effectors and subsequently trigger the defense response. Multiple studies have reported high genetic diversity at R genes maintained by balancing selection. In contrast, little is known about the evolutionary mechanisms shaping the guardee, which may be subject to contrasting evolutionary forces. Here we show that the evolution of the guardee RCR3 is characterized by gene duplication, frequent gene conversion, and balancing selection in the wild tomato species Solanum peruvianum. Investigating the functional characteristics of 54 natural variants through in vitro and in planta assays, we detected differences in recognition of the pathogen effector through interaction with the guardee, as well as substantial variation in the strength of the defense response. This variation is maintained by balancing selection at each copy of the RCR3 gene. Our analyses pinpoint three amino acid polymorphisms with key functional consequences for the coevolution between the guardee (RCR3) and its guard (Cf-2). We conclude that, in addition to coevolution at the "guardee-effector" interface for pathogen recognition, natural selection acts on the "guard-guardee" interface. Guardee evolution may be governed by a counterbalance between improved activation in the presence and prevention of auto-immune responses in the absence of the pathogen.
Ding Wenrui; Fei Li; Gao Qiang; Liu Shuo
2013-01-01
In this paper, we consider an amplify-and-forward (AF) cooperative communication system in which the channel state information (CSI) used in relay selection differs from that during data transmission, i.e., the CSI used in relay selection is outdated. The selected relay may then not actually be the best for data transmission, and the outage performance of the cooperative system deteriorates. To improve its performance, we propose a relay selection strategy based on maximum a posteriori (MAP) estimation, where the relay is selected based on the predicted signal-to-noise ratio (SNR). To reduce the computational complexity, we approximate the a posteriori probability density of the SNR and obtain a closed-form predicted SNR, yielding a relay selection strategy based on the approximate MAP estimation (RS-AMAP). The simulation results show that this approximation leads to trivial performance loss from the perspective of outage probability. Compared with relay selection strategies given in the literature, RS-AMAP largely reduces the outage probability for medium-to-large transmit powers and medium-to-high channel correlation coefficients.
Optimal experiment selection for parameter estimation in biological differential equation models
Transtrum Mark K
2012-07-01
Background: Parameter estimation in biological models is a common yet challenging problem. In this work we explore the problem for gene regulatory networks modeled by differential equations with unknown parameters, such as decay rates, reaction rates, Michaelis-Menten constants, and Hill coefficients. We explore the question to what extent parameters can be efficiently estimated by appropriate experimental selection. Results: A minimization formulation is used to find the parameter values that best fit the experiment data. When the data are insufficient, the minimization problem often has many local minima that fit the data reasonably well. We show that selecting a new experiment based on the local Fisher information of one local minimum generates additional data that allow one to successfully discriminate among the many local minima. The parameters can be estimated to high accuracy by iteratively performing minimization and experiment selection. We show that the experiment choices are roughly independent of which local minimum is used to calculate the local Fisher information. Conclusions: We show that by an appropriate choice of experiments, one can, in principle, efficiently and accurately estimate all the parameters of a gene regulatory network. In addition, we demonstrate that appropriate experiment selection can also allow one to restrict model predictions without constraining the parameters, using many fewer experiments. We suggest that predicting model behaviors and inferring parameters represent two different approaches to model calibration, with different requirements on data and experimental cost.
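The Fisher-information-based experiment selection described above can be sketched for a toy two-parameter model: sequentially pick the measurement time that maximizes the determinant of the accumulated Fisher information (D-optimality). The exponential model, parameter values, and candidate grid below are illustrative assumptions, not the paper's gene-network setup.

```python
import numpy as np

# D-optimal sequential experiment selection for the model y(t) = a * exp(-k t):
# choose the next measurement time maximizing det of the accumulated Fisher
# information J = S^T S built from the parameter sensitivities.
a, k = 1.0, 0.5                      # parameter values at the current fit (assumed)
candidates = np.linspace(0.1, 10.0, 100)

def sensitivity(t):
    return np.array([np.exp(-k * t),            # dy/da
                     -a * t * np.exp(-k * t)])  # dy/dk

def fim(times):
    S = np.array([sensitivity(t) for t in times])
    return S.T @ S

chosen = [0.1]                        # start with one early time point
for _ in range(3):
    J0 = fim(chosen)
    best = max(candidates,
               key=lambda t: np.linalg.det(J0 + np.outer(sensitivity(t), sensitivity(t))))
    chosen.append(float(best))
print(chosen)
```

Repeating the same early time point would leave the information matrix singular in the decay-rate direction; the determinant criterion automatically spreads measurements to where each parameter is informative.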
Empirically Driven Variable Selection for the Estimation of Causal Effects with Observational Data
Keller, Bryan; Chen, Jianshen
2016-01-01
Observational studies are common in educational research, where subjects self-select or are otherwise non-randomly assigned to different interventions (e.g., educational programs, grade retention, special education). Unbiased estimation of a causal effect with observational data depends crucially on the assumption of ignorability, which specifies…
Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model
Kirkegaard, Poul Henning
1993-01-01
Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model is considered for a white-noise-loaded structure modelled as a single-degree-of-freedom linear mechanical system. An analytical solution for an optimal uniform sampling interval, which is optimal...
Yang, Lang; Palermo, Lisa; Black, Dennis M; Eastell, Richard
2014-12-01
A bone fractures only when loaded beyond its strength. The purpose of this study was to determine the association of femoral strength, as estimated by finite element (FE) analysis of dual-energy X-ray absorptiometry (DXA) scans, with incident hip fracture in comparison to hip bone mineral density (BMD), Fracture Risk Assessment Tool (FRAX), and hip structure analysis (HSA) variables. This prospective case-cohort study included a random sample of 1941 women and 668 incident hip fracture cases (295 in the random sample) during a mean ± SD follow-up of 12.8 ± 5.7 years from the Study of Osteoporotic Fractures (n = 7860 community-dwelling women ≥67 years of age). We analyzed the baseline DXA scans (Hologic 1000) of the hip using a validated plane-stress, linear-elastic finite element (FE) model of the proximal femur and estimated the femoral strength during a simulated sideways fall. Cox regression accounting for the case-cohort design assessed the association of estimated femoral strength with hip fracture. The age-body mass index (BMI)-adjusted hazard ratio (HR) per SD decrease for estimated strength (2.21; 95% CI, 1.95-2.50) was greater than that for total hip (TH) BMD (1.86; 95% CI, 1.67-2.08; p < 0.05), FRAX scores (range, 1.32-1.68; p hip BMD or FRAX scores. The association of estimated strength with incident hip fracture was strong (Harrell's C index 0.770), significantly better than TH BMD (0.759; p < 0.05). Similar findings were obtained for intracapsular and extracapsular fractures. In conclusion, the estimated femoral strength from FE analysis of DXA scans is an independent predictor and performs at least as well as FN BMD in predicting incident hip fracture in postmenopausal women. © 2014 American Society for Bone and Mineral Research.
Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-04-01
Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood) that is the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with
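The core idea of evidence estimation by importance sampling can be sketched on a toy conjugate model where the marginal likelihood is known in closed form. This is not the GMIS/bridge-sampling estimator itself, only the underlying identity Z = E_q[ L(theta) p(theta) / q(theta) ], with the proposal q chosen here by hand rather than fitted to posterior samples:

```python
import math, random

# Toy evidence estimate by importance sampling (illustration, not GMIS).

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def evidence_is(loglik, prior_pdf, q_sample, q_pdf, n=200_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        th = q_sample(rng)
        total += math.exp(loglik(th)) * prior_pdf(th) / q_pdf(th)
    return total / n

# Model: y ~ N(theta, 1) with prior theta ~ N(0, 1) and observed y = 0.
# The analytic evidence is N(y=0; 0, 2) ~= 0.2821.
y = 0.0
Z_hat = evidence_is(
    loglik=lambda th: -0.5 * (y - th) ** 2 - 0.5 * math.log(2 * math.pi),
    prior_pdf=lambda th: normal_pdf(th, 0.0, 1.0),
    q_sample=lambda rng: rng.gauss(0.0, math.sqrt(2.0)),
    q_pdf=lambda th: normal_pdf(th, 0.0, 2.0),
)
Z_true = normal_pdf(0.0, 0.0, 2.0)
```

The quality of the proposal q is exactly what GMIS addresses: a Gaussian mixture fitted to DREAM posterior samples keeps the importance weights well behaved in high dimensions, where a hand-picked q would fail.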
Modified ADALINE algorithm for harmonic estimation and selective harmonic elimination in inverters
Vasumathi, B.; Moorthi, S.
2011-11-01
In digital signal processing, algorithms are well developed for the estimation of harmonic components. In power electronic applications, fast system response is of primary importance. An effective method for the estimation of instantaneous harmonic components, combined with a conventional harmonic elimination technique, is presented in this article. The primary function is to eliminate undesirable higher harmonic components from the selected signal (current or voltage), and it requires only knowledge of the frequency of the component to be eliminated. A signal processing technique using a modified ADALINE algorithm is proposed for harmonic estimation. The proposed method remains effective, converging to minimal error and yielding a finer estimate. A conventional control based on pulse width modulation for selective harmonic elimination is used to eliminate harmonic components after their estimation. This method can be applied to a wide range of equipment. The validity of the proposed method for estimating and eliminating voltage harmonics is demonstrated with a dc/ac inverter as a simulation example. The results are then compared with the existing ADALINE algorithm to illustrate its effectiveness.
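A minimal sketch of ADALINE-style harmonic estimation, assuming a plain LMS update on a sin/cos basis at known harmonic orders; the article's modified update rule is not reproduced in the abstract, so this shows only the baseline scheme it builds on:

```python
import math

# ADALINE harmonic estimation sketch: track sin/cos weights per harmonic with
# an LMS gradient step; the amplitude of harmonic k is hypot(w_sin, w_cos).

def adaline_harmonics(signal, fs, f0, orders, mu=0.05):
    w = {k: [0.0, 0.0] for k in orders}          # [sin, cos] weights per order
    for n, y in enumerate(signal):
        t = n / fs
        basis = {k: (math.sin(2*math.pi*k*f0*t), math.cos(2*math.pi*k*f0*t))
                 for k in orders}
        y_hat = sum(w[k][0]*s + w[k][1]*c for k, (s, c) in basis.items())
        e = y - y_hat                             # instantaneous error
        for k, (s, c) in basis.items():           # LMS weight update
            w[k][0] += 2*mu*e*s
            w[k][1] += 2*mu*e*c
    return {k: math.hypot(*w[k]) for k in orders}

# Test signal: fundamental of amplitude 1.0 plus a 3rd harmonic of 0.3.
fs, f0 = 5000.0, 50.0
sig = [1.0*math.sin(2*math.pi*f0*n/fs) + 0.3*math.sin(2*math.pi*3*f0*n/fs)
       for n in range(5000)]
amps = adaline_harmonics(sig, fs, f0, orders=[1, 3])
```

Once the per-harmonic amplitude is known, a selective-elimination controller can target exactly the estimated component, which is the pairing the article describes.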
Shanbhogue, Vikram V; Hansen, Stinus; Folkestad, Lars;
2015-01-01
at the distal radius and tibia were placed in a constant proportion to the entire length of the bone in both patients and healthy volunteers. In age- and weight-adjusted models, HR patients had significantly higher total bone cross-sectional areas (radius 36%, tibia 20%; both p ... impact of lower vBMD and trabecular number on bone strength seems to be compensated by an increase in bone diameter, resulting in HR patients having normal estimates of bone strength. © 2014 American Society for Bone and Mineral Research....
Kusumoto, Yasuaki; Takaki, Kenji; Matsuda, Tadamitsu; Nitta, Osamu
2016-06-01
[Purpose] The aim of this study was to investigate differences in selective voluntary motor control of the lower extremities by objective assessment and to determine the relationship between selective voluntary motor control and knee extensor strength in children with spastic diplegia. [Subjects and Methods] Forty individuals who had spastic cerebral palsy, with Gross Motor Function Classification System levels ranging from I to III, were assessed using the Selective Control Assessment of the Lower Extremity and by testing maximum knee extensor strength. The unaffected side was defined as the lower limb with the higher score, and the affected side as the lower limb with the lower score. [Results] The Selective Control Assessment of the Lower Extremity score on the affected side was lower on average than that on the unaffected side. The scores showed a significant inverse correlation with maximum knee extensor strength. [Conclusion] There was a bilateral difference in selective voluntary motor control of the lower extremities in children with spastic diplegia, and selective voluntary motor control of the lower extremity was related to maximum knee extensor strength.
U.S. Department of Health & Human Services — 2010-2015. U.S. Census Annual Estimates of the Resident Population for Selected Age Groups by Sex for the United States. The estimates are based on the 2010 Census...
A proposed selection index for feedlot profitability based on estimated breeding values.
van der Westhuizen, R R; van der Westhuizen, J
2009-04-22
It is generally accepted that feed intake and growth (gain) are the most important economic components when calculating profitability in a growth test or feedlot. We developed a single post-weaning growth (feedlot) index based on the economic values of different components. Variance components, heritabilities and genetic correlations for and between initial weight (IW), final weight (FW), feed intake (FI), and shoulder height (SHD) were estimated by multitrait restricted maximum likelihood procedures. The estimated breeding values (EBVs) and the economic values for IW, FW and FI were used in a selection index to estimate a post-weaning or feedlot profitability value. Heritabilities for IW, FW, FI, and SHD were 0.41, 0.40, 0.33, and 0.51, respectively. The highest genetic correlations were 0.78 (between IW and FW) and 0.70 (between FI and FW). EBVs were used in a selection index to calculate a single economic value for each animal. This economic value is an indication of the gross profitability value or gross test value (GTV) of the animal in a post-weaning growth test. GTVs varied between -R192.17 and R231.38, with an average of R9.31 and a standard deviation of R39.96. The Pearson correlations between EBVs (for production and efficiency traits) and GTV ranged from -0.51 to 0.68. The lowest correlation (closest to zero) was 0.26, between the Kleiber ratio and GTV. Correlations of 0.68 and -0.51 were estimated between average daily gain and GTV and between feed conversion ratio and GTV, respectively. These results show that it is possible to select for GTV. The selection index can benefit feedlots in selecting offspring of bulls with high GTVs to maximize profitability.
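A selection index of this kind is a weighted sum of EBVs. The sketch below uses hypothetical economic weights and EBVs purely for illustration; the paper derives its weights from actual feedlot cost and price structures:

```python
# Linear selection index sketch: GTV = sum of (economic weight x EBV).
# The weights below are made up; real weights come from feed cost, weight
# prices, etc. Negative weights penalize traits that raise costs.

ECON_WEIGHTS = {"IW": -1.5, "FW": 2.0, "FI": -0.8}   # hypothetical R per unit

def gross_test_value(ebv):
    """Gross test value of one animal from its trait EBVs."""
    return sum(ECON_WEIGHTS[t] * ebv[t] for t in ECON_WEIGHTS)

bulls = {
    "A": {"IW": 2.0, "FW": 10.0, "FI": 5.0},   # GTV = -3 + 20 - 4   = 13.0
    "B": {"IW": 1.0, "FW": 12.0, "FI": 9.0},   # GTV = -1.5 + 24 - 7.2 = 15.3
}
ranked = sorted(bulls, key=lambda b: gross_test_value(bulls[b]), reverse=True)
```

Ranking candidates by GTV is then a one-liner, which is exactly what makes a single-index approach attractive for routine feedlot selection.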
Bhargava, Kapilesh, E-mail: kapilesh_66@yahoo.co.u [Architecture and Civil Engineering Division, Bhabha Atomic Research Center, Trombay, Mumbai 400 085 (India); Mori, Yasuhiro [Graduate School of Environmental Studies, Nagoya University, Nagoya 464-8603 (Japan); Ghosh, A.K. [Reactor Safety Division, Bhabha Atomic Research Center, Trombay, Mumbai 400 085 (India)
2011-05-15
Research highlights: Predictive models for corrosion-induced damages in RC structures. Formulations for time-dependent flexural and shear strengths of corroded RC beams. Methodology for mean and c.o.v. for time-dependent strengths of corroded RC beams. Simple estimation of mean and c.o.v. for flexural strength with loss of bond. - Abstract: The structural deterioration of reinforced concrete (RC) structures due to reinforcement corrosion is a major worldwide problem. Damage to RC structures due to reinforcement corrosion manifests in the form of expansion, cracking and eventual spalling of the cover concrete, resulting in serviceability and durability degradation of such structures. In addition to loss of cover, an RC structure may suffer structural damage due to loss of reinforcement cross-sectional area and loss of bond between corroded reinforcement and surrounding cracked concrete, sometimes to the extent that structural failure becomes inevitable. This paper forms the first part of a study addressing time-dependent reliability analyses of RC beams affected by reinforcement corrosion. In this paper, predictive models are first presented for the quantitative assessment of time-dependent damage in RC beams, recognized as loss of mass and cross-sectional area of the reinforcing bar, loss of concrete section owing to peeling of the cover concrete, and loss of bond between corroded reinforcement and surrounding cracked concrete. These models are then used to present analytical formulations for evaluating the time-dependent flexural and shear strengths of corroded RC beams, based on standard composite mechanics expressions for RC sections. Further, by considering variability in the identified basic variables that could affect the time-dependent strengths of corrosion-affected RC beams, the estimation of statistical descriptions for the time-dependent strengths is presented for a typical simply supported RC beam. The statistical descriptions
Duckham, Rachel L; Baxter-Jones, Adam D G; Johnston, James D; Vatanparast, Hassanali; Cooper, David; Kontulainen, Saija
2014-02-01
The long-term benefits of habitual physical activity during adolescence on adult bone structure and strength are poorly understood. We investigated whether physically active adolescents had greater bone size, density, content, and estimated bone strength in young adulthood when compared to their peers who were inactive during adolescence. Peripheral quantitative computed tomography (pQCT) was used to measure the tibia and radius of 122 (73 females) participants (age mean ± SD, 29.3 ± 2.3 years) of the Saskatchewan Pediatric Bone Mineral Accrual Study (PBMAS). Total bone area (ToA), cortical density (CoD), cortical area (CoA), cortical content (CoC), and estimated bone strength in torsion (SSIp ) and muscle area (MuA) were measured at the diaphyses (66% tibia and 65% radius). Total density (ToD), trabecular density (TrD), trabecular content (TrC), and estimated bone strength in compression (BSIc ) were measured at the distal ends (4%). Participants were grouped by their adolescent physical activity (PA) levels (inactive, average, and active) based on mean PA Z-scores obtained from serial questionnaire assessments completed during adolescence. We compared adult bone outcomes across adolescent PA groups in each sex using analysis of covariance followed by post hoc pairwise comparisons with Bonferroni adjustments. When adjusted for adult height, MuA, and PA, adult males who were more physically active than their peers in adolescence had 13% greater adjusted torsional bone strength (SSIp , p adolescence had 10% larger adjusted CoA (p adolescence seemed to persist into young adulthood, with greater ToA and SSIp in males, and greater CoA, CoC, and TrC in females.
Herrero-Medrano, J M; Mathur, P K; ten Napel, J; Rashidi, H; Alexandri, P; Knol, E F; Mulder, H A
2015-04-01
Robustness is an important issue in the pig production industry. Since pigs from international breeding organizations have to withstand a variety of environmental challenges, selection of pigs with the inherent ability to sustain their productivity in diverse environments may be an economically feasible approach in the livestock industry. The objective of this study was to estimate genetic parameters and breeding values across different levels of environmental challenge load. The challenge load (CL) was estimated as the reduction in reproductive performance during different weeks of a year using 925,711 farrowing records from farms distributed worldwide. A wide range of levels of challenge, from favorable to unfavorable environments, was observed among farms with high CL values being associated with confirmed situations of unfavorable environment. Genetic parameters and breeding values were estimated in high- and low-challenge environments using a bivariate analysis, as well as across increasing levels of challenge with a random regression model using Legendre polynomials. Although heritability estimates of number of pigs born alive were slightly higher in environments with extreme CL than in those with intermediate levels of CL, the heritabilities of number of piglet losses increased progressively as CL increased. Genetic correlations among environments with different levels of CL suggest that selection in environments with extremes of low or high CL would result in low response to selection. Therefore, selection programs of breeding organizations that are commonly conducted under favorable environments could have low response to selection in commercial farms that have unfavorable environmental conditions. Sows that had experienced high levels of challenge at least once during their productive life were ranked according to their EBV. The selection of pigs using EBV ignoring environmental challenges or on the basis of records from only favorable environments
Li, Ying; Wang, Hong; Li, Xiao Bing
2015-01-01
Vegetation is an important part of the ecosystem, and estimation of fractional vegetation cover is significant for monitoring vegetation growth in a region. With Landsat TM images and HJ-1B images as data sources, an improved selective endmember linear spectral mixture model (SELSMM) was put forward in this research to estimate the fractional vegetation cover in the Huangfuchuan watershed in China. We compared the result with the vegetation coverage estimated with the linear spectral mixture model (LSMM) and conducted accuracy tests on the two results with field survey data to study the effectiveness of the different models in estimating vegetation coverage. Results indicated that: (1) the RMSE of the SELSMM estimate based on TM images is the lowest, at 0.044; the RMSEs of LSMM based on TM images, SELSMM based on HJ-1B images, and LSMM based on HJ-1B images are 0.052, 0.077, and 0.082, respectively, all higher than that of SELSMM based on TM images; (2) the R2 values of SELSMM based on TM images, LSMM based on TM images, SELSMM based on HJ-1B images, and LSMM based on HJ-1B images are 0.668, 0.531, 0.342, and 0.336, respectively. Among these models, SELSMM based on TM images has the highest estimation accuracy and the highest correlation with measured vegetation coverage. Of the two methods tested, SELSMM is superior to LSMM in estimating vegetation coverage, and it is also better at unmixing mixed pixels of TM images than those of HJ-1B images. The SELSMM based on TM images is thus comparatively accurate and reliable for regional fractional vegetation cover estimation.
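The baseline LSMM step can be sketched as per-pixel constrained least squares; the endmember-selection refinement that distinguishes SELSMM is omitted, and the endmember spectra below are made up for illustration:

```python
import numpy as np

# Minimal linear spectral mixture model (LSMM) sketch: solve for endmember
# fractions by least squares, with the sum-to-one constraint appended as a
# heavily weighted extra equation. SELSMM would additionally choose which
# endmembers enter the model for each pixel; that step is not shown.

def unmix(pixel, endmembers, w=100.0):
    """pixel: (bands,), endmembers: (bands, k) -> fraction vector (k,)."""
    E = np.vstack([endmembers, w * np.ones(endmembers.shape[1])])
    y = np.append(pixel, w)                # enforces fractions summing to 1
    f, *_ = np.linalg.lstsq(E, y, rcond=None)
    return f

E = np.array([[0.1, 0.8],                  # 3 bands, 2 endmembers (hypothetical)
              [0.2, 0.6],
              [0.9, 0.1]])
pix = 0.3 * E[:, 0] + 0.7 * E[:, 1]        # a synthetic 30/70 mixed pixel
frac = unmix(pix, E)
```

Fractional vegetation cover is then just the fraction assigned to the vegetation endmember, mapped over every pixel of the scene.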
Jiang, M; Fan, W L; Xing, S Y; Wang, J; Li, P; Liu, R R; Li, Q H; Zheng, M Q; Cui, H X; Wen, J; Zhao, G P
2017-02-01
Intramuscular fat (IMF) content contributes to meat flavor and improves meat quality. Excessive abdominal fat, however, leads to a waste of feed resources. Here, an independent up-selection for IMF was used as a control (Line C), and a balanced selection program, with up-selection for IMF and down-selection for AFP (Line B), was studied in JingXing yellow chickens. The mean of IMF and AFP within a family was the phenotypic value upon which selection was based. The selective pressures on IMF in Line B and Line C were the same in each generation. At G5, the IMF was significantly higher (P < 0.05). IMF increased by 11.4% and AFP decreased by 1.5% in Line B compared with the G0 generation. In contrast, the IMF increased by 17.6%, but was accompanied by an 18.7% increase in AFP, in control Line C. Of 10 other traits measured, body weight at 56 d of age (BW56) and the percentage of eviscerated weight (EWP) showed a significant difference between the 2 lines (P < 0.05). The heritabilities of IMF and AFP, estimated by the DMU package, were 0.16 and 0.32, respectively. A moderate positive correlation existed between IMF and AFP (0.35). A balanced selection program for increasing IMF while controlling AFP (Line B) is shown here to be effective in practical chicken breeding. © 2016 Poultry Science Association Inc.
Verma, Geeta; Trehan, Mridula; Sharma, Sunil
2013-09-01
To measure and compare the shear bond strength and adhesive remnant index of a light-cure composite (Enlight, Ormco) and a dual-cure composite (Phase II dual cure, Reliance Ortho). Sixty extracted human premolar teeth were divided into two groups: group I (blue), conventional light-cure composite resin (Enlight, Ormco), and group II (green), dual-cure composite resin (Phase II dual cure, Reliance Ortho), with 30 teeth in each group. These samples were tested on a universal testing machine to measure shear bond strength. Student's t-test showed that the mean shear bond strength of the conventional light-cure group (8.54 MPa - 10.42 MPa) was significantly lower than that of the dual-cure group (10.45 MPa - 12.17 MPa). These findings indicate that the shear bond strength of the dual-cure composite resin (Phase II dual cure, Reliance Ortho) is comparatively higher than that of the conventional light-cure composite resin (Enlight, Ormco). In the majority of the samples, adhesive remnant index (ARI) scores were 4 and 5 in both groups, whereas score 1 was attained by the fewest samples in both groups. How to cite this article: Verma G, Trehan M, Sharma S. Comparison of Shear Bond Strength and Estimation of Adhesive Remnant Index between Light-cure Composite and Dual-cure Composite: An in vitro Study. Int J Clin Pediatr Dent 2013;6(3):166-170.
Estimation Risk Modeling in Optimal Portfolio Selection: An Empirical Study from Emerging Markets
Sarayut Nathaphan
2010-01-01
An efficient portfolio is one that yields the maximum expected return for a given level of risk, or the minimum level of risk for a given expected return. However, optimal portfolios do not always turn out to be as efficient as intended. Especially during a financial crisis, an optimal portfolio is not an optimal investment, as it does not yield maximum return given a specific level of risk, and vice versa. One possible explanation for the unimpressive performance of seemingly efficient portfolios is incorrectness in parameter estimates, called “estimation risk in parameter estimates”. Six different estimating strategies are employed to explore ex-post portfolio performance when estimation risk is incorporated. These strategies are the traditional Mean-Variance (EV), Adjusted Beta (AB) approach, Resampled Efficient Frontier (REF), Capital Asset Pricing Model (CAPM), Single Index Model (SIM), and the Single Index Model incorporating a shrinkage Bayesian factor, namely the Bayesian Single Index Model (BSIM). Among the six alternative strategies, shrinkage estimators incorporating the single index model outperform the other traditional portfolio selection strategies. Allowing for asset mispricing and applying a Bayesian shrinkage adjustment factor to each asset's alpha, a single factor, namely excess market return, is adequate for alleviating estimation uncertainty.
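As a small illustration of how estimation risk enters portfolio selection, the sketch below computes global minimum-variance weights with an optional shrinkage of the sample covariance toward a diagonal target. This is a generic shrinkage device for illustration, not the paper's Bayesian single-index adjustment:

```python
import numpy as np

# Global minimum-variance weights: w = S^-1 1 / (1' S^-1 1), where S is the
# (optionally shrunk) covariance. Shrinking toward a diagonal target is one
# crude answer to estimation risk in the sample covariance.

def min_var_weights(cov, shrink=0.2):
    k = cov.shape[0]
    target = np.diag(np.diag(cov))             # diagonal shrinkage target
    S = (1 - shrink) * cov + shrink * target
    w = np.linalg.solve(S, np.ones(k))
    return w / w.sum()                          # normalize to sum to one

cov = np.array([[0.04, 0.01],                   # hypothetical 2-asset covariance
                [0.01, 0.09]])
w = min_var_weights(cov, shrink=0.0)            # -> [8/11, 3/11]
```

With `shrink > 0` the off-diagonal estimates, which are the noisiest in small samples, are pulled toward zero, which typically stabilizes the weights out of sample.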
Gao, Wenqing; Reiser, Peter J.; Coss, Christopher C.; Phelps, Mitch A.; Kearbey, Jeffrey D.; Miller, Duane D.; Dalton, James T.
2007-01-01
The partial agonist activity of a selective androgen receptor modulator (SARM) in the prostate was demonstrated in orchidectomized rats. In the current study, we characterized the full agonist activity of S-3-(4-acetylamino-phenoxy)-2-hydroxy-2-methyl-N-(4-nitro-3-trifluoromethyl-phenyl)-propionamide (a structurally related SARM referred to in other publications and hereafter as S-4) in skeletal muscle, bone, and pituitary of castrated male rats. Twelve weeks after castration, animals were treated with S-4 (3 or 10 mg/kg), dihydrotestosterone (DHT) (3 mg/kg), or vehicle for 8 wk. S-4 (3 and 10 mg/kg) restored soleus muscle mass and strength and levator ani muscle mass to that seen in intact animals. Similar changes were also observed in DHT-treated (3 mg/kg) animals. Compared with the anabolic effects observed in muscle, DHT (3 mg/kg) stimulated prostate and seminal vesicle weights to more than 2-fold greater than those observed in intact controls, whereas S-4 (3 mg/kg) returned these androgenic organs to only 16 and 17%, respectively, of the control levels. S-4 (3 and 10 mg/kg) and DHT (3 mg/kg) restored castration-induced loss in lean body mass. Furthermore, S-4 treatment caused a significantly larger increase in total body bone mineral density than DHT. S-4 (3 and 10 mg/kg) also demonstrated agonist activity in the pituitary and significantly decreased plasma LH and FSH levels in castrated animals in a dose-dependent manner. In summary, the strong anabolic effects of S-4 in skeletal muscle, bone, and pituitary were achieved with minimal pharmacologic effect in the prostate. The tissue-selective pharmacologic activity of SARMs provides obvious advantages over steroidal androgen therapy and demonstrates the promising therapeutic utility that this new class of drugs may hold. PMID:16099859
Wyss, Richard; Girman, Cynthia J; LoCasale, Robert J; Brookhart, Alan M; Stürmer, Til
2013-01-01
It is often preferable to simplify the estimation of treatment effects on multiple outcomes by using a single propensity score (PS) model. Variable selection in PS models impacts the efficiency and validity of treatment effects. However, the impact of different variable selection strategies on the estimated treatment effects in settings involving multiple outcomes is not well understood. The authors use simulations to evaluate the impact of different variable selection strategies on the bias and precision of effect estimates to provide insight into the performance of various PS models in settings with multiple outcomes. Simulated studies consisted of dichotomous treatment, two Poisson outcomes, and eight standard-normal covariates. Covariates were selected for the PS models based on their effects on treatment, a specific outcome, or both outcomes. The PSs were implemented using stratification, matching, and weighting (inverse probability treatment weighting). PS models including only covariates affecting a specific outcome (outcome-specific models) resulted in the most efficient effect estimates. The PS model that only included covariates affecting either outcome (generic-outcome model) performed best among the models that simultaneously controlled measured confounding for both outcomes. Similar patterns were observed over the range of parameter values assessed and all PS implementation methods. A single, generic-outcome model performed well compared with separate outcome-specific models in most scenarios considered. The results emphasize the benefit of using prior knowledge to identify covariates that affect the outcome when constructing PS models and support the potential to use a single, generic-outcome PS model when multiple outcomes are being examined. Copyright © 2012 John Wiley & Sons, Ltd.
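The inverse probability treatment weighting (IPTW) implementation mentioned above can be sketched as follows, taking fitted propensity scores as given; the PS-model fitting itself (any logistic fit of treatment on the selected covariates) is omitted:

```python
import numpy as np

# IPTW sketch: weight treated units by 1/ps and controls by 1/(1-ps), then
# take the weighted difference in outcome means (Horvitz-Thompson style).

def iptw_effect(y, t, ps):
    y = np.asarray(y, float)
    t = np.asarray(t, float)
    ps = np.asarray(ps, float)
    w1 = t / ps                    # weights for treated units
    w0 = (1 - t) / (1 - ps)        # weights for control units
    return (w1 @ y) / w1.sum() - (w0 @ y) / w0.sum()

# Sanity check: with a constant PS (no confounding) IPTW reduces to the
# plain difference in means: (3+5)/2 - (2+4)/2 = 1.0.
y = [3.0, 5.0, 2.0, 4.0]
t = [1, 1, 0, 0]
eff = iptw_effect(y, t, ps=[0.5] * 4)
```

The point of the simulation study is which covariates go into the PS model feeding `ps`; the weighting step itself is identical whether the model is outcome-specific or generic.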
The influence of selection for protein stability on dN/dS estimations.
Dasmeh, Pouria; Serohijos, Adrian W R; Kepp, Kasper P; Shakhnovich, Eugene I
2014-10-28
Understanding the relative contributions of various evolutionary processes (purifying selection, neutral drift, and adaptation) is fundamental to evolutionary biology. A common metric to distinguish these processes is the ratio of nonsynonymous to synonymous substitutions (i.e., dN/dS), interpreted from the neutral theory as a null model. However, from biophysical considerations, mutations have non-negligible effects on the biophysical properties of proteins, such as folding stability. In this work, we investigated how stability affects the rate of protein evolution in phylogenetic trees by using simulations that combine explicit protein sequences with associated stability changes. We first simulated myoglobin evolution in phylogenetic trees with a biophysically realistic approach that accounts for 3D structural information and estimates of changes in stability upon mutation. We then compared evolutionary rates inferred directly from simulation to those estimated using maximum-likelihood (ML) methods. We found that the dN/dS estimated by ML methods (ωML) is highly predictive of the per gene dN/dS inferred from the simulated phylogenetic trees. This agreement is strong in the regime of high stability, where protein evolution is neutral. At low folding stabilities and under mutation-selection balance, we observe deviations from neutrality (per gene dN/dS > 1 and dN/dS < 1). While the per gene dN/dS is robust to these deviations, ML tests for positive selection detect statistically significant per site dN/dS > 1. Altogether, we show how protein biophysics affects dN/dS estimations and their subsequent interpretation. These results are important for improving current approaches for detecting positive selection. © The Author(s) 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
A New Adaptive Channel Estimation for Frequency Selective Time Varying Fading OFDM Channels
Afifi, Wessam M
2010-01-01
In this paper a new algorithm for adaptive dynamic channel estimation for frequency selective time varying fading OFDM channels is proposed. The new algorithm adopts a new strategy that successfully increases OFDM symbol rate. Instead of using a fixed training pilot sequence, the proposed algorithm uses a logic controller to choose among several available training patterns. The controller choice is based on the cross-correlation between pilot symbols over two consecutive time instants (which is considered to be a suitable measure of channel stationarity) as well as the deviation from the desired BER. Simulation results of the system performance confirm the effectiveness of this new channel estimation technique over traditional non-adaptive estimation methods in increasing the data rate of OFDM symbols while maintaining the same probability of error.
Channel Selection and Feature Projection for Cognitive Load Estimation Using Ambulatory EEG
Tian Lan
2007-01-01
We present an ambulatory cognitive state classification system to assess the subject's mental load based on EEG measurements. The ambulatory cognitive state estimator is utilized in the context of a real-time augmented cognition (AugCog) system that aims to enhance the cognitive performance of a human user through computer-mediated assistance based on assessments of cognitive states using physiological signals including, but not limited to, EEG. This paper focuses particularly on the offline channel selection and feature projection phases of the design and aims to present mutual-information-based techniques that use a simple sample estimator for this quantity. Analyses conducted on data collected from 3 subjects performing 2 tasks (n-back/Larson) at 2 difficulty levels (low/high) demonstrate that the proposed mutual-information-based dimensionality reduction scheme can achieve up to 94% cognitive load estimation accuracy.
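A toy version of mutual-information-based channel selection, using a plug-in estimator on discretized features; the paper's sample estimator for continuous EEG features is more involved, and the feature names below are hypothetical:

```python
import math
from collections import Counter

# Rank discrete channel features by sample mutual information with the class
# label (plug-in estimator in bits) and keep the top-k.

def mutual_information(xs, ys):
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * math.log2((c / n) / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

def select_channels(features, labels, k=1):
    """features: dict name -> list of discretized values; keep top-k by MI."""
    scored = sorted(features,
                    key=lambda f: mutual_information(features[f], labels),
                    reverse=True)
    return scored[:k]

labels = [0, 0, 1, 1]
feats = {"informative": [0, 0, 1, 1],   # mirrors the label -> MI = 1 bit
         "noise":       [0, 1, 0, 1]}   # independent of the label -> MI = 0
best = select_channels(feats, labels, k=1)
```

Ranking by MI rather than by correlation is what lets the selection pick up nonlinear dependencies between a channel's features and the cognitive-load label.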
Huang, Jian
2011-01-01
The $\\ell_1$-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted $\\ell_1$-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted $\\ell_1$-penalized estimator in sparse, high-dimensional settings where the number of predictors $p$ can be much larger than the sample size $n$. Adaptive Lasso is considered as a special case. A multistage method is developed to apply an adaptive Lasso recursively. We provide $\\ell_q$ oracle inequalities, a general selection consistency theorem, and an upper bound on the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear mod...
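A minimal numpy sketch of the weighted $\ell_1$ idea for the linear-regression special case, with the multistage/adaptive step implemented as one recursion of data-driven weights. The coordinate-descent solver, the penalty scale, and the toy data (an orthogonalized design, for a well-behaved example) are illustrative assumptions, not the estimators studied in the article.

```python
import numpy as np

def weighted_lasso(X, y, lam, w, n_sweeps=50):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * sum_j w_j |b_j|.
    A bare-bones solver for the linear-regression special case."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]              # partial residual
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam * w[j], 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(1)
n, p = 200, 50
Q, _ = np.linalg.qr(rng.normal(size=(n, p)))
X = Q * np.sqrt(n)                                      # well-conditioned design
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]                             # sparse ground truth
y = X @ beta + rng.normal(scale=0.5, size=n)

b_lasso = weighted_lasso(X, y, lam=0.15, w=np.ones(p))  # plain Lasso
w_adapt = 1.0 / (np.abs(b_lasso) + 1e-3)                # data-driven weights
b_adapt = weighted_lasso(X, y, lam=0.15, w=w_adapt)     # one adaptive stage
```

The adaptive stage penalizes coordinates the first-stage Lasso set to (near) zero very heavily, which is what drives the improved selection properties in sparse settings; the article's multistage method applies this recursion repeatedly.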
Evaluation of spectral coherence estimation methods for endmembers selection in hyperspectral images
David Fernandes
2005-08-01
The right choice of endmembers is an important task in hyperspectral image classification. Among the several models that use endmembers is the Linear Spectral Mixture (LSM) model, which has been used extensively to estimate fractional abundance images. This work proposes two semi-supervised methods for endmember selection based on spectral coherence, an extension of the correlation coefficient concept. Spectral samples associated with classes are chosen a priori. These candidate endmember samples are compared by their relative spectral coherence, and a subset of samples with minimum relative coherence is selected as the final endmembers. AVIRIS (Airborne Visible/InfraRed Imaging Spectrometer) images are used to test and compare the proposed methods.
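The selection principle above — keep the candidate spectra that are least alike — can be sketched with a greedy scheme. Cosine similarity is used here as a stand-in for the paper's relative spectral coherence, and the Gaussian-bump "spectra" and the greedy strategy itself are illustrative assumptions, not the authors' exact algorithms.

```python
import numpy as np

def select_endmembers(spectra, k):
    """Greedily pick k candidate spectra with minimum mutual similarity.
    Cosine similarity stands in for 'relative spectral coherence'
    (an extension of the correlation coefficient)."""
    unit = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)
    sim = unit @ unit.T                       # pairwise spectral similarity
    np.fill_diagonal(sim, 1.0)                # ignore self-similarity
    sel = list(np.unravel_index(np.argmin(sim), sim.shape))  # least-alike pair
    while len(sel) < k:
        rest = [i for i in range(len(spectra)) if i not in sel]
        # add the candidate least similar to everything selected so far
        sel.append(min(rest, key=lambda i: sim[i, sel].max()))
    return sorted(sel)

# Synthetic "spectra": two pure materials plus two mixtures of them
wavelengths = np.linspace(0.0, 1.0, 50)
soil = np.exp(-((wavelengths - 0.3) / 0.1) ** 2)
vegetation = np.exp(-((wavelengths - 0.7) / 0.1) ** 2)
candidates = np.stack([soil, vegetation,
                       0.5 * soil + 0.5 * vegetation,
                       0.9 * soil + 0.1 * vegetation])
picked = select_endmembers(candidates, 2)
```

On this toy input the mixtures are highly similar to their parents, so the two pure spectra are the minimum-coherence subset — the same intuition that drives endmember selection in the LSM setting.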
Wang, Huai-Chun; Susko, Edward; Roger, Andrew J
2014-04-01
Standard protein phylogenetic models use fixed rate matrices of amino acid interchange derived from analyses of large databases. Differences between the stationary amino acid frequencies of these rate matrices and those of a data set of interest are typically adjusted for by matrix multiplication that converts the empirical rate matrix to an exchangeability matrix which is then postmultiplied by the amino acid frequencies in the alignment. The result is a time-reversible rate matrix with stationary amino acid frequencies equal to the data set frequencies. On the basis of population genetics principles, we develop an amino acid substitution-selection model that parameterizes the fitness of an amino acid as the logarithm of the ratio of the frequency of the amino acid to the frequency of the same amino acid under no selection. The model gives rise to a different sequence of matrix multiplications to convert an empirical rate matrix to one that has stationary amino acid frequencies equal to the data set frequencies. We incorporated the substitution-selection model with an improved amino acid class frequency mixture (cF) model to partially take into account site-specific amino acid frequencies in the phylogenetic models. We show that 1) the selection models fit data significantly better than corresponding models without selection for most of the 21 test data sets; 2) both cF and cF selection models favored the phylogenetic trees that were inferred under current sophisticated models and methods for three difficult phylogenetic problems (the positions of microsporidia and breviates in eukaryote phylogeny and the position of the root of the angiosperm tree); and 3) for data simulated under site-specific residue frequencies, the cF selection models estimated trees closer to the generating trees than a standard Γ model or cF without selection. We also explored several ways of estimating amino acid frequencies under neutral evolution that are required for these selection
Florian Schellenberg
2015-01-01
Background. Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. Methods. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation. Results. Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. Conclusion. The present review introduces the different computational techniques and outlines their advantages and disadvantages for informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines.
Katherine A. Zeller; Kevin McGarigal; Paul Beier; Samuel A. Cushman; T. Winston Vickers; Walter M. Boyce
2014-01-01
Estimating landscape resistance to animal movement is the foundation for connectivity modeling, and resource selection functions based on point data are commonly used to empirically estimate resistance. In this study, we used GPS data points acquired at 5-min intervals from radiocollared pumas in southern California to model context-dependent point selection...
Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals.
Engemann, Denis A; Gramfort, Alexandre
2015-03-01
Magnetoencephalography and electroencephalography (M/EEG) measure non-invasively the weak electromagnetic fields induced by post-synaptic neural currents. The estimation of the spatial covariance of the signals recorded on M/EEG sensors is a building block of modern data analysis pipelines. Such covariance estimates are used in brain-computer interfaces (BCI) systems, in nearly all source localization methods for spatial whitening as well as for data covariance estimation in beamformers. The rationale for such models is that the signals can be modeled by a zero mean Gaussian distribution. While maximizing the Gaussian likelihood seems natural, it leads to a covariance estimate known as empirical covariance (EC). It turns out that the EC is a poor estimate of the true covariance when the number of samples is small. To address this issue the estimation needs to be regularized. The most common approach downweights off-diagonal coefficients, while more advanced regularization methods are based on shrinkage techniques or generative models with low rank assumptions: probabilistic PCA (PPCA) and factor analysis (FA). Using cross-validation all of these models can be tuned and compared based on Gaussian likelihood computed on unseen data. We investigated these models on simulations, one electroencephalography (EEG) dataset as well as magnetoencephalography (MEG) datasets from the most common MEG systems. First, our results demonstrate that different models can be the best, depending on the number of samples, heterogeneity of sensor types and noise properties. Second, we show that the models tuned by cross-validation are superior to models with hand-selected regularization. Hence, we propose an automated solution to the often overlooked problem of covariance estimation of M/EEG signals. The relevance of the procedure is demonstrated here for spatial whitening and source localization of MEG signals.
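The model-comparison idea in this abstract can be illustrated in a few lines of numpy: candidate covariance estimators — here the empirical covariance and a family of shrinkage estimators toward a scaled identity — are scored by Gaussian log-likelihood on unseen data. The dimensions, shrinkage grid, and synthetic data are assumptions for illustration; the paper additionally compares shrinkage against PPCA and factor-analysis models and uses full cross-validation rather than a single held-out set.

```python
import numpy as np

def gaussian_loglik(S, X):
    """Mean log-likelihood of zero-mean rows of X under N(0, S)."""
    p = S.shape[0]
    _, logdet = np.linalg.slogdet(S)
    quad = np.einsum('ij,jk,ik->i', X, np.linalg.inv(S), X)
    return float(np.mean(-0.5 * (quad + logdet + p * np.log(2 * np.pi))))

rng = np.random.default_rng(0)
p, n_train, n_test = 40, 60, 500
A = rng.normal(size=(p, p))
L = np.linalg.cholesky(A @ A.T / p + np.eye(p))     # true covariance factor
X_train = rng.normal(size=(n_train, p)) @ L.T       # few training samples
X_test = rng.normal(size=(n_test, p)) @ L.T         # "unseen" data

ec = X_train.T @ X_train / n_train                  # empirical covariance (EC)
target = np.trace(ec) / p * np.eye(p)               # scaled-identity target
candidates = {0.0: ec}
candidates.update({a: (1 - a) * ec + a * target for a in (0.05, 0.1, 0.3, 0.5)})
scores = {a: gaussian_loglik(S, X_test) for a, S in candidates.items()}
best_alpha = max(scores, key=scores.get)            # tuned on unseen data
```

Because n_train is close to p here, the raw EC is poorly conditioned and a regularized candidate wins — the abstract's point that the EC is a poor estimate when the number of samples is small.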
Jefferson Fagundes Loss
2008-01-01
The objectives of this study were to: (1) evaluate the resistive torque of an open kinetic chain strength-training machine for performing knee extensions, and (2) perform an analysis estimating internal forces in the tibiofemoral joint. During the first phase of the study, measurements were taken of the machine under analysis (external forces), and then calculations were performed to estimate forces on the lower limb (internal forces). Equations were defined to calculate human force (HF) and the moment of muscular force (MMF). Perpendicular muscular force (MFp) and joint force (JFp), axial muscular force (MF″) and joint force (JF″), and total muscular force (MF) and joint force (JF) were all calculated. Five knee angles were analyzed (zero, 30, 45, 60, and 90 degrees). A reduction was observed in HF at higher knee angles, while MF and JF increased at the same time. HF was always lower than the load selected on the machine, which indicates a reduced overload imposed by the machine. The reduction observed in MFp and JFp at higher knee angles indicates a lower tendency to shear the tibia in relation to the femur. At the same time, there was an increase in JF″ due to higher MF″. The biomechanical model proposed in this study has shown itself to be adequate for the day-to-day needs of professionals who supervise and orient strength training.
Modulation depth estimation and variable selection in state-space models for neural interfaces.
Malik, Wasim Q; Hochberg, Leigh R; Donoghue, John P; Brown, Emery N
2015-02-01
Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems.
Deforche, Koen; Cozzi-Lepri, Alessandro; Theys, Kristof
2008-01-01
BACKGROUND: A method has been developed to estimate a fitness landscape experienced by HIV-1 under treatment selective pressure as a function of the genotypic sequence thereby also estimating the genetic barrier to resistance. METHODS: We evaluated the performance of two estimated fitness landsca...
Ian G Handel
Current post-epidemic sero-surveillance uses random selection of animal holdings. A better strategy may be to estimate the benefits gained by sampling each farm and use this to target selection. In this study we estimate the probability of undiscovered infection for sheep farms in Devon after the 2001 foot-and-mouth disease outbreak using the combination of a previously published model of daily infection risk and a simple model of the probability of discovery of infection during the outbreak. This allows comparison of the system sensitivity (ability to detect infection in the area) of arbitrary, random sampling with risk-targeted selection across a full range of sampling budgets. We show that it is possible to achieve 95% system sensitivity by sampling, on average, 945 farms with random sampling and 184 farms with risk-targeted sampling. We also examine the effect of ordering samples by risk to expedite the return to a disease-free status. Risk-ordering the sampling process results in detection of positive farms, if present, 15.6 days sooner than with randomly ordered sampling, assuming 50 farms are tested per day.
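The advantage of risk-targeted over random selection can be reproduced with a toy simulation: assign each holding a model-based probability of undiscovered infection, simulate true status from those probabilities, then compare the fraction of infected holdings detected under the two sampling orders at a fixed budget. The farm count, the Beta risk distribution, the perfect-test assumption, and the budget are all illustrative assumptions, not the published risk model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_farms = 2000
risk = rng.beta(0.5, 20.0, n_farms)         # modelled P(undiscovered infection)
infected = rng.random(n_farms) < risk       # simulated true status

def system_sensitivity(order, budget):
    """Fraction of infected farms found when sampling `budget` farms
    in the given order (perfect diagnostic test assumed)."""
    return infected[order[:budget]].sum() / max(infected.sum(), 1)

random_order = rng.permutation(n_farms)
risk_order = np.argsort(-risk)              # highest-risk farms first

sens_random = system_sensitivity(random_order, 300)
sens_targeted = system_sensitivity(risk_order, 300)
```

Because infection concentrates on the high-risk farms, the risk-ordered scheme reaches a given system sensitivity at a far smaller budget — the mechanism behind the 945-versus-184-farm result reported above.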
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
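The "quantitative methods for selecting among several candidate probability density functions" can be as simple as comparing maximum-likelihood fits by AIC. The sketch below fits normal and lognormal candidates to synthetic depth-use data; the candidate set, parameter counts, and data are assumptions for illustration (the paper's actual workflow is provided as R code in its appendix).

```python
import numpy as np

def aic_normal(x):
    """AIC of a normal fit (2 parameters, closed-form MLE)."""
    mu, sd = x.mean(), x.std()
    ll = np.sum(-0.5 * np.log(2 * np.pi * sd**2) - (x - mu)**2 / (2 * sd**2))
    return 2 * 2 - 2 * ll

def aic_lognormal(x):
    """AIC of a lognormal fit (2 parameters, MLE on log-data)."""
    lx = np.log(x)
    mu, sd = lx.mean(), lx.std()
    ll = np.sum(-np.log(x) - 0.5 * np.log(2 * np.pi * sd**2)
                - (lx - mu)**2 / (2 * sd**2))
    return 2 * 2 - 2 * ll

rng = np.random.default_rng(4)
depths = rng.lognormal(mean=0.0, sigma=0.5, size=300)  # synthetic depth use
scores = {"normal": aic_normal(depths), "lognormal": aic_lognormal(depths)}
best = min(scores, key=scores.get)                     # lower AIC wins
```

Fitting parameters directly from raw observations (rather than smoothing first) and carrying the MLE's standard errors forward is what gives this approach its concise expression of estimation uncertainty in the HSC curves.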
Joint variable and rank selection for parsimonious estimation of high dimensional matrices
Bunea, Florentina; Wegkamp, Marten
2011-01-01
This article is devoted to optimal dimension reduction methods for sparse, high dimensional multivariate response regression models. Both the number of responses and that of the predictors may exceed the sample size. Sometimes viewed as complementary, predictor selection and rank reduction are the most popular strategies for obtaining lower dimensional approximations of the parameter matrix in such models. We show in this article that important gains in prediction accuracy can be obtained by considering them jointly. For this, we first motivate a new class of sparse multivariate regression models, in which the coefficient matrix has low rank {\\bf and} zero rows or can be well approximated by such a matrix. Then, we introduce estimators that are based on penalized least squares, with novel penalties that impose simultaneous row and rank restrictions on the coefficient matrix. We prove that these estimators indeed adapt to the unknown matrix sparsity and have fast rates of convergence. We support our theoretica...
Brunner, Robert
2014-04-01
In a series of two contributions, decisive business-related aspects of the current process status to transfer research results on diffractive optical elements (DOEs) into commercial solutions are discussed. In part I, the focus was on the patent landscape. Here, in part II, market estimations concerning DOEs for selected applications are presented, comprising classical spectroscopic gratings, security features on banknotes, DOEs for high-end applications, e.g., for the semiconductor manufacturing market and diffractive intra-ocular lenses. The derived market sizes are referred to the optical elements, itself, rather than to the enabled instruments. The estimated market volumes are mainly addressed to scientifically and technologically oriented optical engineers to serve as a rough classification of the commercial dimensions of DOEs in the different market segments and do not claim to be exhaustive.
Selective Segmentation for Global Optimization of Depth Estimation in Complex Scenes
Sheng Liu
2013-01-01
This paper proposes a segmentation-based global optimization method for depth estimation. First, to obtain an accurate matching cost, the original local stereo matching approach based on a self-adapting matching window is integrated with two matching cost optimization strategies aimed at handling both borders and occlusion regions. Second, we employ a comprehensive smoothness term to satisfy the diverse smoothness requirements of real scenes. Third, a selective segmentation term enforces plane trend constraints selectively on the corresponding segments to further improve the accuracy of the depth results at the object level. Experiments on the Middlebury image pairs show that the proposed global optimization approach is competitive with other state-of-the-art matching approaches.
Baudry, Jean-Patrick
2012-01-01
The Integrated Completed Likelihood (ICL) criterion was proposed by Biernacki et al. (2000) in the model-based clustering framework to select a relevant number of classes and has been used by statisticians in various application areas. A theoretical study of this criterion is proposed. A contrast related to the clustering objective is introduced: the conditional classification likelihood. This yields an estimator and a class of model selection criteria. The properties of these new procedures are studied, and ICL is shown to be an approximation of one of these criteria. We contrast these results with the currently prevailing view of ICL, namely that it is not consistent. Moreover, these results give insights into the class notion underlying ICL and feed a reflection on the class notion in clustering. General results on penalized minimum contrast criteria and on mixture models are derived, which are interesting in their own right.
The influence of selection for protein stability on dN/dS estimations
Dasmeh, Pouria; Serohijos, Adrian W. R.; Kepp, Kasper Planeta;
2014-01-01
from the neutral theory as a null model. However, from biophysical considerations, mutations have non-negligible effects on the biophysical properties of proteins such as folding stability. In this work, we investigated how stability affects the rate of protein evolution in phylogenetic trees by using...... simulations that combine explicit protein sequences with associated stability changes. We first simulated myoglobin evolution in phylogenetic trees with a biophysically realistic approach that accounts for 3D structural information and estimates of changes in stability upon mutation. We then compared...... stability where protein evolution is neutral. At low folding stabilities and under mutation-selection balance, we observe deviations from neutrality (per gene dN/dS > 1 and dN/dS positive selection detect statistically...
Kaushik Halder
2015-01-01
Summary and Conclusion: Hatha yoga can improve anthropometric characteristics, muscular strength, and flexibility among volunteers of different age groups and can also be helpful in preventing and attenuating age-related deterioration of these parameters.
Methods for estimating selected low-flow frequency statistics for unregulated streams in Kentucky
Martin, Gary R.; Arihood, Leslie D.
2010-01-01
This report provides estimates of, and presents methods for estimating, selected low-flow frequency statistics for unregulated streams in Kentucky including the 30-day mean low flows for recurrence intervals of 2 and 5 years (30Q2 and 30Q5) and the 7-day mean low flows for recurrence intervals of 5, 10, and 20 years (7Q2, 7Q10, and 7Q20). Estimates of these statistics are provided for 121 U.S. Geological Survey streamflow-gaging stations with data through the 2006 climate year, which is the 12-month period ending March 31 of each year. Data were screened to identify the periods of homogeneous, unregulated flows for use in the analyses. Logistic-regression equations are presented for estimating the annual probability of the selected low-flow frequency statistics being equal to zero. Weighted-least-squares regression equations were developed for estimating the magnitude of the nonzero 30Q2, 30Q5, 7Q2, 7Q10, and 7Q20 low flows. Three low-flow regions were defined for estimating the 7-day low-flow frequency statistics. The explicit explanatory variables in the regression equations include total drainage area and the mapped streamflow-variability index measured from a revised statewide coverage of this characteristic. The percentage of the station low-flow statistics correctly classified as zero or nonzero by use of the logistic-regression equations ranged from 87.5 to 93.8 percent. The average standard errors of prediction of the weighted-least-squares regression equations ranged from 108 to 226 percent. The 30Q2 regression equations have the smallest standard errors of prediction, and the 7Q20 regression equations have the largest standard errors of prediction. The regression equations are applicable only to stream sites with low flows unaffected by regulation from reservoirs and local diversions of flow and to drainage basins in specified ranges of basin characteristics. Caution is advised when applying the equations for basins with characteristics near the
Lin, Wei-Shao; Ercoli, Carlo; Feng, Changyong; Morton, Dean
2012-07-01
The objective of this study was to compare the effect of veneering porcelain (monolithic or bilayer specimens) and core fabrication technique (heat-pressed or CAD/CAM) on the biaxial flexural strength and Weibull modulus of leucite-reinforced and lithium-disilicate glass ceramics. In addition, the effect of veneering technique (heat-pressed or powder/liquid layering) for zirconia ceramics on the biaxial flexural strength and Weibull modulus was studied. Five ceramic core materials (IPS Empress Esthetic, IPS Empress CAD, IPS e.max Press, IPS e.max CAD, IPS e.max ZirCAD) and three corresponding veneering porcelains (IPS Empress Esthetic Veneer, IPS e.max Ceram, IPS e.max ZirPress) were selected for this study. Each core material group contained three subgroups based on the core material thickness and the presence of corresponding veneering porcelain as follows: 1.5 mm core material only (subgroup 1.5C), 0.8 mm core material only (subgroup 0.8C), and 1.5 mm core/veneer group: 0.8 mm core with 0.7 mm corresponding veneering porcelain with a powder/liquid layering technique (subgroup 0.8C-0.7VL). The ZirCAD group had one additional 1.5 mm core/veneer subgroup with 0.7 mm heat-pressed veneering porcelain (subgroup 0.8C-0.7VP). The biaxial flexural strengths were compared for each subgroup (n = 10) according to ISO standard 6872:2008 with ANOVA and Tukey's post hoc multiple comparison test (p≤ 0.05). The reliability of strength was analyzed with the Weibull distribution. For all core materials, the 1.5 mm core/veneer subgroups (0.8C-0.7VL, 0.8C-0.7VP) had significantly lower mean biaxial flexural strengths (p strength (p= 0.004) than subgroup 0.8C-0.7VP. Nonetheless, both veneered ZirCAD groups showed greater flexural strength than the monolithic Empress and e.max groups, regardless of core thickness and fabrication techniques. Comparing fabrication techniques, Empress Esthetic/CAD, e.max Press/CAD had similar biaxial flexural strength (p= 0.28 for Empress pair; p= 0
Robinson, Hugh S.; Ruth, Toni K.; Gude, Justin A.; Choate, David; DeSimone, Rich; Hebblewhite, Mark; Matchett, Marc R.; Mitchell, Michael S.; Murphy, Kerry; Williams, Jim
2015-01-01
To be most effective, the scale of wildlife management practices should match the range of a particular species’ movements. For this reason, combined with our inability to rigorously or regularly census mountain lion populations, several authors have suggested that mountain lions be managed in a source-sink or metapopulation framework. We used a combination of resource selection functions, mortality estimation, and dispersal modeling to estimate cougar population levels in Montana statewide and potential population level effects of planned harvest levels. Between 1980 and 2012, 236 independent mountain lions were collared and monitored for research in Montana. From these data we used 18,695 GPS locations collected during winter from 85 animals to develop a resource selection function (RSF), and 11,726 VHF and GPS locations from 142 animals along with the locations of 6343 mountain lions harvested from 1988–2011 to validate the RSF model. Our RSF model validated well in all portions of the State, although it appeared to perform better in Montana Fish, Wildlife and Parks (MFWP) Regions 1, 2, 4 and 6, than in Regions 3, 5, and 7. Our mean RSF based population estimate for the total population (kittens, juveniles, and adults) of mountain lions in Montana in 2005 was 3926, with almost 25% of the entire population in MFWP Region 1. Estimates based on a high and low reference population estimates produce a possible range of 2784 to 5156 mountain lions statewide. Based on a range of possible survival rates we estimated the mountain lion population in Montana to be stable to slightly increasing between 2005 and 2010 with lambda ranging from 0.999 (SD = 0.05) to 1.02 (SD = 0.03). We believe these population growth rates to be a conservative estimate of true population growth. Our model suggests that proposed changes to female harvest quotas for 2013–2015 will result in an annual statewide population decline of 3% and shows that, due to reduced dispersal, changes to
Li, Sheng; Lobb, David A; Tiessen, Kevin H D; McConkey, Brian G
2010-01-01
The fallout radionuclide cesium-137 ((137)Cs) has been successfully used in soil erosion studies worldwide. However, discrepancies often exist between the erosion rates estimated using various conversion models. As a result, there is often confusion in the use of the various models and in the interpretation of the data. Therefore, the objective of this study was to test the structural and parametrical uncertainties associated with four conversion models typically used in cultivated agricultural landscapes. For the structural uncertainties, the Soil Constituent Redistribution by Erosion Model (SCREM) was developed and used to simulate the redistribution of fallout (137)Cs due to tillage and water erosion along a simple two-dimensional (horizontal and vertical) transect. The SCREM-predicted (137)Cs inventories were then imported into the conversion models to estimate the erosion rates. The structural uncertainties of the conversion models were assessed based on the comparisons between the conversion-model-estimated erosion rates and the erosion rates determined or used in the SCREM. For the parametrical uncertainties, test runs were conducted by varying the values of the parameters used in the model, and the parametrical uncertainties were assessed based on the responsive changes of the estimated erosion rates. Our results suggest that: (i) the performance/accuracy of the conversion models was largely dependent on the relative contributions of water vs. tillage erosion; and (ii) the estimated erosion rates were highly sensitive to the input values of the reference (137)Cs level, particle size correction factors and tillage depth. Guidelines were proposed to aid researchers in selecting and applying the conversion models under various situations common to agricultural landscapes.
Kumar, Sudhir; Srinivasan, P; Sharma, S D; Mayya, Y S
2012-01-01
Measuring the strength of high dose rate (HDR) (192)Ir brachytherapy sources on receipt from the vendor is an important component of a quality assurance program. Owing to their ready availability in radiotherapy departments, the Farmer-type ionization chambers are also used to determine the strength of HDR (192)Ir brachytherapy sources. The use of a Farmer-type ionization chamber requires the estimation of the scatter correction factor along with positioning error (c) and the constant of proportionality (f) to determine the strength of HDR (192)Ir brachytherapy sources. A simplified approach based on a least squares method was developed for estimating the values of f and M(s). The seven distance method was followed to record the ionization chamber readings for parameterization of f and M(s). Analytically calculated values of M(s) were used to determine the room scatter correction factor (K(sc)). The Monte Carlo simulations were also carried out to calculate f and K(sc) to verify the magnitude of the parameters determined by the proposed analytical approach. The value of f determined using the simplified analytical approach was found to be in excellent agreement with the Monte Carlo simulated value (within 0.7%). Analytically derived values of K(sc) were also found to be in good agreement with the Monte Carlo calculated values (within 1.47%). Being far simpler than the presently available methods of evaluating f, the proposed analytical approach can be adopted for routine use by clinical medical physicists to estimate f by hand calculations.
Sample size estimation and sampling techniques for selecting a representative sample
Aamir Omair
2014-01-01
Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The greater the required precision, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can then be generalized to the target population.
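For a categorical outcome, the sample-size calculation described above reduces to the standard normal-approximation formula n = z²·p(1−p)/d², where p is the expected proportion, d the required margin of error, and z the critical value for the chosen confidence level. A minimal sketch (the function name is ours):

```python
import math

def sample_size_proportion(p, margin, z=1.96):
    """Minimum n to estimate a proportion p to within +/- margin at the
    confidence level implied by z (1.96 ~ 95%), using n = z^2 p(1-p)/d^2."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

n_5pct = sample_size_proportion(0.5, 0.05)   # worst-case p, 5% margin -> 385
n_3pct = sample_size_proportion(0.5, 0.03)   # tighter 3% margin -> 1068
```

Using p = 0.5 maximizes p(1−p) and so gives the conservative worst-case size; the jump from 385 to 1068 when the margin tightens from 5% to 3% illustrates the article's point that greater precision demands a substantially larger sample.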
Crop yield, genetic parameter estimation and selection of sacha inchi in central Amazon
Mágno Sávio Ferreira Valente
2017-06-01
In Brazil, sacha inchi oil is produced by hand from plant materials with no breeding or detailed information about the chemical composition of seeds. In addition, most of the current information on the agronomic traits of this species originates from research carried out in the Peruvian Amazon. In order to promote the research and cultivation of sacha inchi in Brazilian territory, this study aimed to analyze, in the central Amazon region, different accessions of this oilseed for characteristics of production and quality of fruits and seeds, as well as to estimate genetic parameters through mixed models, with identification of superior accessions for breeding purposes. A total of 37 non-domesticated accessions were evaluated in a randomized block design, with five replications and two plants per plot. The average oil content in seeds was 29.07%, and unsaturated fatty acids amounted to 91.5% of the total fat content. For the yield traits, the estimates of individual broad-sense heritability were moderate (~0.33), while the heritability based on the average of progenies resulted in a selective accuracy of approximately 0.85. The use of a selection index provided simultaneous gains for yield traits (> 40%) and oil yield. A high genetic variability was observed for the main traits of commercial interest for the species, as well as promising perspectives for the development of superior varieties for agro-industrial use.
A simple numerical model to estimate the effect of coal selection on pulverized fuel burnout
Sun, J.K.; Hurt, R.H.; Niksa, S.; Muzio, L.; Mehta, A.; Stallings, J. [Brown University, Providence, RI (USA). Division Engineering
2003-06-01
The amount of unburned carbon in ash is an important performance characteristic in commercial boilers fired with pulverized coal. Unburned carbon levels are known to be sensitive to fuel selection, and there is great interest in methods of estimating the burnout propensity of coals based on proximate and ultimate analysis - the only fuel properties readily available to utility practitioners. A simple numerical model is described that is specifically designed to estimate the effects of coal selection on burnout in a way that is useful for commercial coal screening. The model is based on a highly idealized description of the combustion chamber but employs detailed descriptions of the fundamental fuel transformations. The model is validated against data from laboratory and pilot-scale combustors burning a range of international coals, and then against data obtained from full-scale units during periods of coal switching. The validated model form is then used in a series of sensitivity studies to explore the role of various individual fuel properties that influence burnout.
Estimation of selected properties of forest soils using near-infrared spectroscopy (NIR)
Kania Mateusz
2016-03-01
Full Text Available The study focused on the application of near-infrared spectroscopy (NIR) as a tool for evaluating selected properties of forest soils. We analysed 144 soil samples from the topsoil of nine plots located in southern Poland. Six plots were established under pine stands, and three plots under oak stands. The NIR measurements were performed using an Antharis II FT scanner. On the basis of the spectrum files obtained from scanning 96 samples and the measurement results obtained for selected properties of the soil samples, we developed a calibration model. The model was validated using 48 independent samples. We attempted to estimate the following properties of forest soils: pH, C:N ratio, organic carbon content (Ct), total nitrogen (Nt), clay content (Clay), base cation content (BC), cation exchange capacity (CEC) and total acidity (TA). We conclude that estimation of soil properties using the NIR method can be applied as an additional (to laboratory analysis) or initial assessment of soil quality. Our results also suggest that forest species composition may affect the mathematical model applied to NIR spectra analysis; however, this hypothesis needs further investigation.
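A minimal sketch of the calibration/validation workflow described above (96 calibration and 48 independent validation samples), using synthetic data and ordinary least squares in place of the chemometric model actually fitted to NIR spectra:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for NIR data: 144 samples x 20 spectral bands, with a
# soil property (e.g. organic carbon) linearly encoded in the spectra.
X = rng.normal(size=(144, 20))
true_coef = rng.normal(size=20)
y = X @ true_coef + rng.normal(scale=0.1, size=144)

# Calibrate on 96 samples, validate on 48 independent ones (as in the study).
Xc, yc, Xv, yv = X[:96], y[:96], X[96:], y[96:]
coef, *_ = np.linalg.lstsq(np.c_[np.ones(96), Xc], yc, rcond=None)
pred = np.c_[np.ones(48), Xv] @ coef

# Validation R^2: how well the calibration transfers to unseen samples.
r2 = 1 - np.sum((yv - pred) ** 2) / np.sum((yv - yv.mean()) ** 2)
print(round(r2, 3))
```

Real NIR calibration typically uses PLS or similar latent-variable regression on hundreds of wavelengths, but the calibrate-then-validate structure is the same.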
Leonid Serhiyenko
2015-10-01
Full Text Available Purpose: to define the methodology for carrying out the standing high jump test and to systematize general notions about measuring human strength and anaerobic abilities. Material and Methods: methods of theoretical analysis and generalization and methods of searching and studying scientific information were used. Results: a classification of standing high jumps was formulated that differentiates jumps according to the manner of execution and supports estimation of the development of motor abilities. Conclusion: the methodology for performing different kinds of jumps is described.
Estimates of internal-dose equivalent from inhalation and ingestion of selected radionuclides
Dunning, D.E.
1982-01-01
This report presents internal radiation dose conversion factors for radionuclides of interest in environmental assessments of nuclear fuel cycles. This volume provides an updated summary of estimates of committed dose equivalent for radionuclides considered in three previous Oak Ridge National Laboratory (ORNL) reports. Intakes by inhalation and ingestion are considered. The International Commission on Radiological Protection (ICRP) Task Group Lung Model has been used to simulate the deposition and retention of particulate matter in the respiratory tract. Results corresponding to activity median aerodynamic diameters (AMAD) of 0.3, 1.0, and 5.0 μm are given. The gastrointestinal (GI) tract has been represented by a four-segment catenary model with exponential transfer of radioactivity from one segment to the next. Retention of radionuclides in systemic organs is characterized by linear combinations of decaying exponential functions, as recommended in ICRP Publication 30. The first-year annual dose rate, maximum annual dose rate, and fifty-year dose commitment per microcurie intake of each radionuclide are given for selected target organs and the effective dose equivalent. These estimates include contributions from specified source organs plus the systemic activity residing in the rest of the body; cross irradiation due to penetrating radiations has been incorporated into these estimates. 15 references.
Blind CP-OFDM and ZP-OFDM Parameter Estimation in Frequency Selective Channels
Vincent Le Nir
2009-01-01
Full Text Available A cognitive radio system needs accurate knowledge of the radio spectrum it operates in. Blind modulation recognition techniques have been proposed to discriminate between single-carrier and multicarrier modulations and to estimate their parameters. Some powerful techniques use autocorrelation- and cyclic autocorrelation-based features of the transmitted signal applying to OFDM signals using a Cyclic Prefix time guard interval (CP-OFDM. In this paper, we propose a blind parameter estimation technique based on a power autocorrelation feature applying to OFDM signals using a Zero Padding time guard interval (ZP-OFDM which in particular excludes the use of the autocorrelation- and cyclic autocorrelation-based techniques. The proposed technique leads to an efficient estimation of the symbol duration and zero padding duration in frequency selective channels, and is insensitive to receiver phase and frequency offsets. Simulation results are given for WiMAX and WiMedia signals using realistic Stanford University Interim (SUI and Ultra-Wideband (UWB IEEE 802.15.4a channel models, respectively.
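The autocorrelation-based feature mentioned above can be illustrated with a toy CP-OFDM signal: the cyclic prefix induces an autocorrelation peak at a lag equal to the useful symbol duration. This is a simplified sketch (noiseless, no channel), not the ZP-OFDM power-autocorrelation method actually proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N_FFT, N_CP, N_SYM = 64, 16, 200

# Build a noiseless baseband CP-OFDM signal: each symbol prepends a copy
# of its last N_CP time-domain samples (the cyclic prefix).
symbols = []
for _ in range(N_SYM):
    qpsk = rng.choice([-1, 1], N_FFT) + 1j * rng.choice([-1, 1], N_FFT)
    body = np.fft.ifft(qpsk)
    symbols.append(np.concatenate([body[-N_CP:], body]))
x = np.concatenate(symbols)

# The cyclic prefix makes x[n] correlate with x[n + N_FFT], so the lag of
# the autocorrelation peak reveals the useful (FFT) symbol duration.
lags = np.arange(1, 128)
r = [abs(np.vdot(x[:-d], x[d:])) for d in lags]
est_nfft = int(lags[int(np.argmax(r))])
print(est_nfft)  # 64
```

A ZP-OFDM signal pads with zeros instead of repeating samples, which is precisely why this autocorrelation feature vanishes and a different (power-autocorrelation) statistic is needed.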
Xu Zhifeng; Liang Pei; Yang Wei; Li Sisi; Cai Changchun
2014-01-01
Baozhu sand particles with sizes between 75 μm and 150 μm were coated with resin at a ratio of 1.5 wt.% of the sand. Laser sintering experiments were carried out to investigate the effects of laser energy density (E = P/v), with different laser power (P) and scanning velocity (v), on the dimensional accuracy and tensile strength of sintered parts. The experimental results indicate that, at constant scanning velocity, the tensile strength of sintered samples increases with an increase in laser energy density, while the dimensional accuracy apparently decreases when the laser energy density is larger than 0.032 J·mm⁻². When the laser energy density is 0.024 J·mm⁻², the tensile strength shows no obvious change; but when the laser energy density is larger than 0.024 J·mm⁻², the sample strength initially increases and subsequently decreases with a simultaneous increase of both laser power and scanning velocity. In this study, the optimal energy density range for laser sintering is 0.024-0.032 J·mm⁻². Moreover, samples with the best tensile strength and dimensional accuracy can be obtained when P = 30-40 W and v = 1.5-2.0 m·s⁻¹. Using the optimized laser energy density, laser power and scanning speed, a complex coated sand mould with clear contours and excellent forming accuracy was successfully fabricated.
Samuel Pereira de Carvalho
1999-03-01
Full Text Available It was shown that the classic selection index, under multicollinearity, could not give simultaneous gains for wheat grain production and its primary components. This was due to the instability and, consequently, low precision of the index coefficient estimates. A modification of the index prediction process was proposed to avoid the adverse effects of multicollinearity, adopting a procedure based on ridge regression theory. The modified classic selection index, or ridge index, gave statistically more viable index coefficient estimates and gains for all of the characters evaluated. However, lower gains for number of grains per spike and grain yield were obtained, compared to those obtained with selection for grain yield.
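A minimal numerical sketch of the idea, assuming a Smith-Hazel-type index solution b = P⁻¹g and its ridge-stabilized variant b = (P + kI)⁻¹g (the symbols and values here are hypothetical, not the authors' exact formulation):

```python
import numpy as np

def index_coefficients(P, g, k=0.0):
    """Selection-index coefficients b = (P + k*I)^-1 g: k = 0 gives the
    classic solution; k > 0 gives the ridge-stabilized variant."""
    return np.linalg.solve(P + k * np.eye(P.shape[0]), g)

# Near-singular phenotypic covariance matrix (two highly collinear traits).
P = np.array([[1.0, 0.99],
              [0.99, 1.0]])
g = np.array([0.5, 0.4])  # covariances of traits with the breeding objective

b_classic = index_coefficients(P, g)        # unstable under collinearity
b_ridge = index_coefficients(P, g, k=0.1)   # shrunk, stable coefficients
print(b_classic, b_ridge)
```

With P nearly singular, the classic coefficients blow up to roughly ±5 with opposite signs, while the ridge solution stays moderate; this is the instability the abstract attributes to multicollinearity.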
T. Krauße
2012-02-01
Full Text Available The development of methods for estimating the parameters of hydrologic models under uncertainty has been of high interest in hydrologic research in recent years. In particular, methods that treat the estimation of hydrologic model parameters as a geometric search for a set of robustly performing parameter vectors by application of the concept of data depth have found growing research interest. Bárdossy and Singh (2008) presented a first Robust Parameter Estimation method (ROPE) and applied it to the calibration of a conceptual rainfall-runoff model with daily time step. The basic idea of this algorithm is to identify a set of model parameter vectors with high model performance, called good parameters, and subsequently generate a set of parameter vectors with high data depth with respect to the first set. Both steps are repeated iteratively until a stopping criterion is met. The results estimated in this case study show the high potential of the principle of data depth for the estimation of hydrologic model parameters. In this paper we present some further developments that address the most important shortcomings of the original ROPE approach. We developed a stratified depth-based sampling approach that improves sampling from non-elliptic and multi-modal distributions. It provides higher efficiency for the sampling of deep points in parameter spaces of higher dimensionality. Another modification addresses the problem of too strong a shrinking of the estimated set of robust parameter vectors, which might lead to overfitting for model calibration with a small amount of calibration data. This contradicts the principle of robustness. Therefore, we suggest splitting the available calibration data into two sets and using one set to control the overfitting. All modifications were implemented into a further developed ROPE approach called Advanced Robust Parameter Estimation (AROPE). However, in this approach the estimation of
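The notion of data depth used by ROPE can be illustrated with a simple random-projection approximation of Tukey (halfspace) depth; this is a generic sketch of the concept, not the algorithm's actual depth computation:

```python
import numpy as np

rng = np.random.default_rng(2)

def halfspace_depth(point, cloud, n_dir=500):
    """Approximate Tukey (halfspace) depth: over random directions, the
    smallest fraction of cloud points lying on one side of a hyperplane
    through `point`. Deep points sit centrally in the cloud."""
    dirs = rng.normal(size=(n_dir, cloud.shape[1]))
    proj = (cloud - point) @ dirs.T           # signed projections of the cloud
    frac = (proj >= 0).mean(axis=0)           # fraction on the positive side
    return float(np.minimum(frac, 1 - frac).min())

cloud = rng.normal(size=(400, 2))             # stand-in "good parameter" set
center = cloud.mean(axis=0)
edge = cloud[np.argmax(np.linalg.norm(cloud - center, axis=1))]
print(halfspace_depth(center, cloud), halfspace_depth(edge, cloud))
```

A central parameter vector has depth near 0.5, an extreme one near 0; ROPE favors deep (central, hence robust) vectors within the good-performance set.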
Jung, Jee Woon; Her, Jin Gang; Ko, Jooyeon
2013-10-01
[Purpose] The purpose of this study was to investigate the effect of ankle plantarflexor strength training on selective voluntary motor control, gait parameters, and gross motor function of children with cerebral palsy (CP), focusing on changes in the strength and muscle activity of the ankle plantarflexors. [Methods] Six children aged between 4 and 10 years with CP participated in a 6-week strengthening program. The subjects were evaluated before and after the intervention in terms of ankle plantarflexor strength, muscle activity, gait velocity, cadence, step length, and the D (standing) and E (walking, running, and jumping) dimensions of the Gross Motor Function Measure (GMFM). The data were analyzed using the non-parametric Wilcoxon signed-rank test. [Results] The strength of the plantarflexors increased in the majority of subjects. Significant and clinically meaningful post-intervention improvements in the subjects' gait velocity, cadence, and step length were found. [Conclusion] A controlled ankle plantarflexor strengthening program may lead to improvements in strength and spatiotemporal gait parameters of children with CP.
Luttrell, Karen M.; Tong, Xiaopeng; Sandwell, David T.; Brooks, Benjamin A.; Bevis, Michael G.
2011-11-01
The great 27 February 2010 Mw 8.8 earthquake off the coast of southern Chile ruptured a ˜600 km length of subduction zone. In this paper, we make two independent estimates of shear stress in the crust in the region of the Chile earthquake. First, we use a coseismic slip model constrained by geodetic observations from interferometric synthetic aperture radar (InSAR) and GPS to derive a spatially variable estimate of the change in static shear stress along the ruptured fault. Second, we use a static force balance model to constrain the crustal shear stress required to simultaneously support observed fore-arc topography and the stress orientation indicated by the earthquake focal mechanism. This includes the derivation of a semianalytic solution for the stress field exerted by surface and Moho topography loading the crust. We find that the deviatoric stress exerted by topography is minimized in the limit when the crust is considered an incompressible elastic solid, with a Poisson ratio of 0.5, and is independent of Young's modulus. This places a strict lower bound on the critical stress state maintained by the crust supporting plastically deformed accretionary wedge topography. We estimate the coseismic shear stress change from the Maule event ranged from -6 MPa (stress increase) to 17 MPa (stress drop), with a maximum depth-averaged crustal shear-stress drop of 4 MPa. We separately estimate that the plate-driving forces acting in the region, regardless of their exact mechanism, must contribute at least 27 MPa trench-perpendicular compression and 15 MPa trench-parallel compression. This corresponds to a depth-averaged shear stress of at least 7 MPa. The comparable magnitude of these two independent shear stress estimates is consistent with the interpretation that the section of the megathrust fault ruptured in the Maule earthquake is weak, with the seismic cycle relieving much of the total sustained shear stress in the crust.
Barwick, S A; Tier, B; Swan, A A; Henzell, A L
2013-10-01
Procedures are described for estimating selection index accuracies for individual animals and expected genetic change from selection for the general case where indexes of EBVs predict an aggregate breeding objective of traits that may or may not have been measured. Index accuracies for the breeding objective are shown to take an important general form, being able to be expressed as the product of the accuracy of the index function of true breeding values and the accuracy with which that function predicts the breeding objective. When the accuracies of the individual EBVs of the index are known, prediction error variances (PEVs) and covariances (PECs) for the EBVs within animal are able to be well approximated, and index accuracies and expected genetic change from selection estimated with high accuracy. The procedures are suited to routine use in estimating index accuracies in genetic evaluation, and for providing important information, without additional modelling, on the directions in which a population will move under selection.
Nikoloulopoulos, Aristidis K
2016-06-30
The method of generalized estimating equations (GEE) is popular in the biostatistics literature for analyzing longitudinal binary and count data. It assumes a generalized linear model for the outcome variable and a working correlation among repeated measurements. In this paper, we introduce a viable competitor: the weighted scores method for generalized linear model margins. We weight the univariate score equations using a working discretized multivariate normal model that is a proper multivariate model. Because the weighted scores method is a parametric method based on likelihood, we propose composite likelihood information criteria as an intermediate step for model selection. The same criteria can be used for both correlation structure and variable selection. Simulation studies and the application example show that our method outperforms other existing model selection methods in GEE. From the example, it can be seen that our methods not only improve on GEE in terms of interpretability and efficiency but also can change the inferential conclusions with respect to GEE. Copyright © 2016 John Wiley & Sons, Ltd.
Neural classifier in the estimation process of maturity of selected varieties of apples
Boniecki, P.; Piekarska-Boniecka, H.; Koszela, K.; Zaborowicz, M.; Przybył, K.; Wojcieszak, D.; Zbytek, Z.; Ludwiczak, A.; Przybylak, A.; Lewicki, A.
2015-07-01
This paper seeks to present methods of neural image analysis aimed at estimating the maturity state of selected varieties of apples which are popular in Poland. An identification of the degree of maturity of selected varieties of apples has been conducted on the basis of information encoded in graphical form, presented in the digital photos. The above process involves the application of the BBCH scale, used to determine the maturity of apples. The aforementioned scale is widely used in the EU and has been developed for many species of monocotyledonous plants and dicotyledonous plants. It is also worth noticing that the given scale enables detailed determinations of development stage of a given plant. The purpose of this work is to identify maturity level of selected varieties of apples, which is supported by the use of image analysis methods and classification techniques represented by artificial neural networks. The analysis of graphical representative features based on image analysis method enabled the assessment of the maturity of apples. For the utilitarian purpose the "JabVis 1.1" neural IT system was created, in accordance with requirements of the software engineering dedicated to support the decision-making processes occurring in broadly understood production process and processing of apples.
Froelich, K.J.; Fitzpatrick, C.M.
1976-12-01
The adhesives included several epoxy resins, a varnish, and a B-stage glass cloth (a partially cured resin in a fiberglass cloth matrix). Several parameters critical to bond strength were varied: adhesive and adherend differences, surface preparation, coupling agents, glass cloth, epoxy thickness, fillers, and bonding pressure and temperature. The highest lap shear strengths were obtained with the B-stage glass cloth at both liquid nitrogen and room temperatures, with values of approximately 20 MPa (3000 psi) and approximately 25.5 MPa (3700 psi), respectively.
Chung, Sung-Kuang; Tseng, Yong-Ren; Chen, Chan-Yu; Sun, Shih-Sheng
2011-04-04
A series of platinum(II) terpyridine complexes featuring an aminostilbene donor-acceptor framework was synthesized. The complex with a dithiaazacrown moiety exhibits a highly sensitive and selective colorimetric response to a Hg(2+) cation through modulation of the relative strength of ICT and MLCT transitions. The results from (1)H NMR titration suggest the existence of a weak Pt(II)···Hg(II) metallophilic interaction at low Hg(2+) concentration.
Kawakami, H
2003-01-01
On 100 isobars from mass number 72 to 171, the radiation strength, dose equivalent and mean gamma-ray energy from p + ²³⁸U fission products at the Tandem accelerator facility were estimated on the basis of proton-induced fission mass-yield data by T. Tsukada. In order to control radiation, the decay curves of radiation for each mass after irradiation were estimated and illustrated. These calculation results showed that: 1) the peaks of the p + ²³⁸U fission products are at mass numbers 101 and 133; 2) the gamma-ray strength of the target ion source immediately after irradiation is 3.12×10¹¹ (radiations/s) after 4 cycles in which a UC₂ (2.6 g/cm²) target was irradiated by 30 MeV, 3 μA protons for 5 days and then cooled for 2 days; it decreased to 3.85×10¹⁰ and 6.7×10⁹ (radiations/s) after one day and two weeks of cooling, respectively; 3) the total dose equivalent is 3.8×10⁴ (μSv/h) at 1 m distance without shielding; 4) there are no problems in controlling the following isobars, beca...
T. Yin
2014-10-01
Full Text Available In this paper the so-called Theory of Critical Distances (TCD) is reformulated to make it suitable for estimating the strength of notched metals subjected to dynamic loading. The TCD takes as its starting point the assumption that engineering materials' strength can be accurately predicted by directly post-processing the entire linear-elastic stress field acting on the material in the vicinity of the stress concentrator being assessed. In order to extend the use of the TCD to situations involving dynamic loading, the hypothesis is formed that the required critical distance (which is treated as a material property) varies as the loading rate increases. The accuracy and reliability of this novel reformulation of the TCD were checked against a number of experimental results generated by testing notched cylindrical bars of Al6063-T5. This validation exercise proved that the TCD (applied in the form of the Point, Line, and Area Methods) is capable of estimates falling within an error interval of ±20%. This result is very promising, especially in light of the fact that such a design method can be used in situations of practical interest without the need to explicitly model the non-linear stress vs. strain dynamic behaviour of metals.
T. Krauße
2011-03-01
Full Text Available The development of methods for estimating the parameters of hydrological models under uncertainty has been of high interest in hydrological research in recent years. In particular, methods that treat the estimation of hydrological model parameters as a geometric search for a set of robustly performing parameter vectors by application of the concept of data depth have found growing research interest. Bárdossy and Singh (2008) presented a first proposal and applied it to the calibration of a conceptual rainfall-runoff model with daily time step. Krauße and Cullmann (2011) further developed this method and applied it in a case study to calibrate a process-oriented hydrological model with hourly time step, focussing on flood events in a fast-responding catchment. The results of both studies showed the potential of applying the principle of data depth. However, the weak point of the presented approach also became obvious. The algorithm identifies a set of model parameter vectors with high model performance and subsequently generates a set of parameter vectors with high data depth with respect to the first set. Both steps are repeated iteratively until a stopping criterion is met. In the first step, the estimation of the good parameter vectors is based on the Monte Carlo method. The major shortcoming of this method is that it is strongly dependent on a high number of samples, growing exponentially with the dimensionality of the problem. In this paper we present another robust parameter estimation strategy which applies an approved search strategy for high-dimensional parameter spaces, particle swarm optimisation, in order to identify a set of good parameter vectors within given uncertainty bounds. The generation of deep parameters follows Krauße and Cullmann (2011). The method was compared to the Monte Carlo based robust parameter estimation algorithm on the example of a case study in Krauße and Cullmann (2011) to
Chang Liu; Hong Liu; Yue-Tong Qian; Song Zhu; Su-Qian Zhao
2014-01-01
In this study, we evaluated the influence of post surface pre-treatments on the bond strength of four different cements to glass fiber posts. Eighty extracted human maxillary central incisors and canines were endodontically treated and standardized post spaces were prepared. Four post pre-treatments were tested: (i) no pre-treatment (NS, control), (ii) sandblasting (SA), (iii) silanization (SI) and (iv) sandblasting followed by silanization (SS). Per pre-treatment, four dual-cure resin cements were used for luting posts: DMG LUXACORE Smartmix Dual, Multilink Automix, RelyX Unicem and Panavia F2.0. All specimens were subjected to a micro push-out test. Two-way analysis of variance and Tukey post hoc tests were performed (α = 0.05) to analyze the data. Bond strength was significantly affected by the type of resin cement, and the bond strengths of RelyX Unicem and Panavia F2.0 to the fiber posts were significantly higher than those of the other cement groups. Sandblasting significantly increased the bond strength of the DMG group to the fiber posts.
Mleczek, Mirosław; Magdziak, Zuzanna; Kaczmarek, Zygmunt; Golinski, Piotr
2010-09-01
Determination of the interactions between selected heavy metals (Cd, Co, Cr, Cu, Ni, Pb and Zn) in their phytoremediation by one-year-old cuttings of Salix viminalis 'Cannabina' was the purpose of this work. The results indicate that Salix cuttings may successfully be used in the phytoremediation of soil and/or sewage polluted not only with one metal at high concentration but also with different combinations of metals. Under controlled conditions (the hydroponic experiment), new interactions were found, and known interactions between the heavy metals presented in the matrix were confirmed, depending on their concentration and composition. The results showed that the ratio of metal concentrations can change the interaction intensity. The achieved results enable one to indirectly estimate the accumulation efficiency of dominating metals as well as of accompanying ones at lower concentrations.
Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model
Kirkegaard, Poul Henning
1993-01-01
Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise-loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, optimal in a Fisherian sense, is given. The solution is investigated by a simulation study. It is shown that if the experimental length T1 is fixed it may be useful to sample the record at a high sampling rate, since more measurements from the system are then collected; no optimal sampling interval exists. But if the total number of sample points N is fixed, an optimal sampling interval exists. It is then far worse to use a too large sampling interval than a too small one, since the information losses increase rapidly as the sampling interval increases from the optimal value.
Kumar, Sudhir [Radiological Physics and Advisory Division, Bhabha Atomic Research Centre, CTCRS, Anushaktinagar, Mumbai 400094 (India); Srinivasan, P. [Radiation Safety Systems Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400085 (India); Sharma, S.D., E-mail: sdsharma_barc@rediffmail.com [Radiological Physics and Advisory Division, Bhabha Atomic Research Centre, CTCRS, Anushaktinagar, Mumbai 400094 (India); Mayya, Y.S. [Radiological Physics and Advisory Division, Bhabha Atomic Research Centre, CTCRS, Anushaktinagar, Mumbai 400094 (India)
2012-01-15
Measuring the strength of high dose rate (HDR) ¹⁹²Ir brachytherapy sources on receipt from the vendor is an important component of a quality assurance program. Owing to their ready availability in radiotherapy departments, Farmer-type ionization chambers are also used to determine the strength of HDR ¹⁹²Ir brachytherapy sources. The use of a Farmer-type ionization chamber requires the estimation of the scatter correction factor along with the positioning error (c) and the constant of proportionality (f) to determine the strength of HDR ¹⁹²Ir brachytherapy sources. A simplified approach based on a least squares method was developed for estimating the values of f and M_s. The seven-distance method was followed to record the ionization chamber readings for parameterization of f and M_s. Analytically calculated values of M_s were used to determine the room scatter correction factor (K_sc). Monte Carlo simulations were also carried out to calculate f and K_sc to verify the magnitude of the parameters determined by the proposed analytical approach. The value of f determined using the simplified analytical approach was found to be in excellent agreement with the Monte Carlo simulated value (within 0.7%). Analytically derived values of K_sc were also found to be in good agreement with the Monte Carlo calculated values (within 1.47%). Being far simpler than the presently available methods of evaluating f, the proposed analytical approach can be adopted for routine use by clinical medical physicists to estimate f by hand calculations. - Highlights: • RAKR measurement of a brachytherapy source by the seven-distance method requires the evaluation of f. • A simplified analytical approach based on the least squares method to evaluate f and M_s was developed. • Parameter f calculated by proposed analytical
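A hedged sketch of the least-squares idea behind a multi-distance measurement: assuming the chamber reading follows M(d) = A/(d + c)² + M_s, with A proportional to source strength, c a positioning offset and M_s a constant room-scatter reading (a plausible model for illustration, not necessarily the authors' exact parameterization), the parameters can be recovered from seven-distance readings by a grid search over c combined with linear least squares for A and M_s:

```python
import numpy as np

rng = np.random.default_rng(3)

# Seven measurement distances (cm) and synthetic "true" parameters.
d = np.array([10.0, 15, 20, 25, 30, 35, 40])
A_true, c_true, Ms_true = 5.0e4, 0.35, 2.0
M = A_true / (d + c_true) ** 2 + Ms_true
M = M * (1 + rng.normal(scale=1e-3, size=7))     # small reading noise

# For each trial offset c, the model is linear in (A, Ms): solve by
# least squares and keep the c with the smallest residual.
best = None
for c in np.linspace(-1.0, 1.0, 2001):
    X = np.c_[1.0 / (d + c) ** 2, np.ones(7)]
    (A, Ms), *_ = np.linalg.lstsq(X, M, rcond=None)
    sse = float(np.sum((X @ [A, Ms] - M) ** 2))
    if best is None or sse < best[0]:
        best = (sse, A, c, Ms)
print(best[1:])  # recovered (A, c, Ms)
```

With readings spanning a wide range of distances, the inverse-square term and the constant scatter term separate cleanly, which is what makes the simplified analytical evaluation of f and M_s feasible.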
Duckstein, L.; Bobée, B.; Ashkar, F.
1991-09-01
The problem of fitting a probability distribution, here log-Pearson Type III distribution, to extreme floods is considered from the point of view of two numerical and three non-numerical criteria. The six techniques of fitting considered include classical techniques (maximum likelihood, moments of logarithms of flows) and new methods such as mixed moments and the generalized method of moments developed by two of the co-authors. The latter method consists of fitting the distribution using moments of different order, in particular the SAM method (Sundry Averages Method) uses the moments of order 0 (geometric mean), 1 (arithmetic mean), -1 (harmonic mean) and leads to a smaller variance of the parameters. The criteria used to select the method of parameter estimation are: - the two statistical criteria of mean square error and bias; - the two computational criteria of program availability and ease of use; - the user-related criterion of acceptability. These criteria are transformed into value functions or fuzzy set membership functions and then three Multiple Criteria Decision Modelling (MCDM) techniques, namely, composite programming, ELECTRE, and MCQA, are applied to rank the estimation techniques.
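The "sundry averages" of orders 0, 1 and −1 mentioned above are generalized means; a small sketch with hypothetical peak-flow values (not data from the study):

```python
import numpy as np

def generalized_mean(x, r):
    """Mean of order r as used by the Sundry Averages Method: r = 1 gives
    the arithmetic mean, r = -1 the harmonic mean, and r = 0 (as a limit)
    the geometric mean."""
    x = np.asarray(x, dtype=float)
    if r == 0:
        return float(np.exp(np.mean(np.log(x))))
    return float(np.mean(x ** r) ** (1.0 / r))

flows = [120.0, 340.0, 95.0, 580.0, 210.0]   # hypothetical annual peak flows
am = generalized_mean(flows, 1)
gm = generalized_mean(flows, 0)
hm = generalized_mean(flows, -1)
print(am, gm, hm)   # AM >= GM >= HM for positive data
```

Fitting a distribution so that several of these means match their sample values simultaneously is the essence of the generalized method of moments described in the abstract.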
Leandro R. Monteiro
2005-01-01
Full Text Available In this study, we used a combination of geometric morphometric and evolutionary genetics methods for the inference of possible mechanisms of evolutionary divergence. A sensitivity analysis for the constant-heritability rate test results regarding variation in genetic and demographic parameters was performed, in order to assess the relative influence of uncertainty of parameter estimation on the robustness of test results. As an application, we present a study on body shape variation among populations of the poeciliine fish Poecilia vivipara inhabiting lagoons of the quaternary plains in northern Rio de Janeiro State, Brazil. The sensitivity analysis showed that, in general, the most important parameters are heritability, effective population size and number of generations since divergence. For this specific example, using a conservatively wide range of parameters, the neutral model of genetic drift could not be accepted as a sole cause for the observed magnitude of morphological divergence among populations. A mechanism of directional selection is suggested as the main cause of variation among populations in different habitats and lagoons. The implications of parameter estimation and biological assumptions and consequences are discussed.
HOW WELL DO SELECTION MODELS PERFORM? ASSESSING THE ACCURACY OF ART AUCTION PRE-SALE ESTIMATES.
Yu, Binbing; Gastwirth, Joseph L
2010-04-01
Art auction catalogs provide a pre-sale prediction interval for the price each item is expected to fetch. When the owner consigns art work to the auction house, a reserve price is agreed upon, which is not announced to the bidders. If the highest bid does not reach it, the item is bought in. Since only the prices of the sold items are published, analysts only have a biased sample to examine due to the selective sale process. Relying on the published data leads to underestimating the forecast error of the pre-sale estimates. However, we were able to obtain several art auction catalogs with the highest bids for the unsold items as well as those of the sold items. With these data we were able to evaluate the accuracy of the predictions of the sale prices or highest bids for all items obtained from the original Heckman selection model that assumed normal error distributions as well as those derived from an alternative model using the t(2) distribution, which yielded a noticeably better fit to several sets of auction data. The measures of prediction accuracy are of more than academic interest as they are used by auction participants to guide their bidding or selling strategy, and similar appraisals are accepted by the US Internal Revenue Service to justify the deductions for charitable contributions donors make on their tax returns.
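The claim that relying only on sold items understates forecast error can be shown with a small simulation. The error and reserve distributions below are invented for illustration and are not the Heckman or t(2) models fitted in the study:

```python
import math
import random

random.seed(42)

# Simulate log-price forecast errors for 10,000 consigned items; each item
# sells only if its (log) price beats an unobserved reserve.
n = 10000
errors = [random.gauss(0, 0.5) for _ in range(n)]       # true forecast error
reserves = [random.gauss(-0.3, 0.2) for _ in range(n)]  # log reserve vs estimate
sold = [e for e, r in zip(errors, reserves) if e > r]

def rmse(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# The sold-only sample truncates the left tail (large negative errors are the
# ones most likely to miss the reserve), so its RMSE understates the true
# forecast error over all consigned items.
print(round(rmse(errors), 3), round(rmse(sold), 3))
```

A selection model such as Heckman's corrects for exactly this truncation by jointly modeling the sale decision and the price.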
Asrul Adam
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, there is no study that establishes the importance of every peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the combination of the available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate as compared to standard PSO, as it produces a low-variance model.
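PSO-based feature selection of this kind encodes each particle as a binary mask over candidate features. A minimal sketch follows, using standard binary PSO (sigmoid-transformed velocities as bit probabilities) on toy data; the fitness function and data are invented stand-ins, not the authors' EEG features:

```python
import math
import random

random.seed(1)

# Toy data: 6 candidate "peak features"; only features 0 and 3 carry signal.
def make_sample(label):
    x = [random.gauss(0, 1) for _ in range(6)]
    if label:
        x[0] += 2.0
        x[3] += 2.0
    return x, label

data = [make_sample(i % 2) for i in range(200)]

def fitness(mask):
    """Score a feature subset by class-mean separation over selected
    features, minus a small penalty per feature used."""
    if not any(mask):
        return -1.0
    n1 = sum(1 for _, y in data if y)
    n0 = len(data) - n1
    sep = 0.0
    for j, used in enumerate(mask):
        if used:
            m1 = sum(x[j] for x, y in data if y) / n1
            m0 = sum(x[j] for x, y in data if not y) / n0
            sep += abs(m1 - m0)
    return sep - 0.1 * sum(mask)

# Binary PSO: velocities pass through a sigmoid to give bit probabilities.
n_particles, n_dims, iters = 12, 6, 40
pos = [[random.random() < 0.5 for _ in range(n_dims)] for _ in range(n_particles)]
vel = [[0.0] * n_dims for _ in range(n_particles)]
pbest = [p[:] for p in pos]
pbest_f = [fitness(p) for p in pos]
g = max(range(n_particles), key=lambda i: pbest_f[i])
gbest, gbest_f = pbest[g][:], pbest_f[g]

for _ in range(iters):
    for i in range(n_particles):
        for d in range(n_dims):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] = random.random() < 1.0 / (1.0 + math.exp(-vel[i][d]))
        f = fitness(pos[i])
        if f > pbest_f[i]:
            pbest[i], pbest_f[i] = pos[i][:], f
            if f > gbest_f:
                gbest, gbest_f = pos[i][:], f

print(gbest, round(gbest_f, 3))
```

The asynchronous RA-PSO variant the paper favors updates the global best immediately as each particle moves, which this synchronous sketch already approximates by updating `gbest` inside the particle loop.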
Soller, Eric C.; Hoffman, Grant T.; Heintzelman, Douglas L.; Duffy, Mark T.; Bloom, Jeffrey N.; McNally-Heintzelman, Karen M.
2004-07-01
An ex vivo study was conducted to determine the effect of the irregularity of the scaffold surface on the tensile strength of repairs formed using our Scaffold-Enhanced Biological Adhesive (SEBA). Two different scaffold materials were investigated: (i) a synthetic biodegradable material fabricated from poly(L-lactic-co-glycolic acid); and (ii) a biological material, small intestinal submucosa, manufactured by Cook BioTech. The scaffolds were doped with protein solder composed of 50% (w/v) bovine serum albumin solder and 0.5 mg/ml indocyanine green dye mixed in deionized water, and activated with an 808-nm diode laser. The tensile strength of repairs performed on bovine thoracic aorta, liver, spleen, small intestine and lung, using the smooth and irregular surfaces of the above scaffold-enhanced materials, was measured and the time-to-failure was recorded. The tensile strength of repairs formed using the irregular surfaces of the scaffolds was consistently higher than that of repairs formed using the smooth surfaces. The largest difference was observed on repairs formed on the aorta and small intestine, where the repairs were, on average, 50% stronger using the irregular versus the smooth scaffold surfaces. In addition, the time-to-failure of repairs formed using the irregular surfaces of the scaffolds was between 50% and 100% longer than that achieved using the smooth surfaces. It has previously been shown that distributing or dispersing the adhesive forces over the increased surface area of the scaffold, either smooth or irregular, produces stronger repairs than albumin solder alone. The increase in the absolute strength and longevity of repairs seen in this new study when the irregular surfaces of the scaffolds are used is thought to be due to the distribution of forces between the many independent micro-adhesions provided by the irregular surfaces.
1982-02-01
Ergonomics Guide for the Assessment of Human Static Strength. Am. Ind. Hyg. Assn. J. 36: 505-511, 1975. E. Asmussen, Measurement of Muscular ... and Shoulder Muscles. Ergonomics 23: 37-47, 1980. E. Kamon, personal communication, 1981. B.J. Winer, Statistical Principles in ... Adjustable belts, 5.0 centimeters in width, a stool and a removable elbow rest, which attached to the wood support, were used in some of the tests.
Over, Thomas; Saito, Riki J.; Veilleux, Andrea; Sharpe, Jennifer B.; Soong, David T.; Ishii, Audrey
2016-06-28
ungaged sites and to improve flood-quantile estimates at and near a gaged site; (2) the urbanization-adjusted annual maximum peak discharges and peak discharge quantile estimates at streamgages from 181 watersheds including the 117 study watersheds and 64 additional watersheds in the study region that were originally considered for use in the study but later deemed to be redundant. The urbanization-adjustment equations, spatial regression equations, and peak discharge quantile estimates developed in this study will be made available in the Web application StreamStats, which provides automated regression-equation solutions for user-selected stream locations. Figures and tables comparing the observed and urbanization-adjusted annual maximum peak discharge records by streamgage are provided at http://dx.doi.org/10.3133/sir20165050 for download.
José Claudio Isaias
2015-01-01
In the selection of stock portfolios, one type of analysis that has shown good results is Data Envelopment Analysis (DEA). It, however, has been shown to have gaps regarding its estimates of monthly time horizons of data collection for the selection of stock portfolios and of monthly time horizons for the maintenance of a selected portfolio. To better estimate these horizons, this study proposes a binary mathematical programming model that minimizes squared errors. This model is the paper's main contribution. The model's results are validated by simulating the estimated annual return indexes of a portfolio that uses both estimated horizons and of other portfolios that do not use these horizons. The simulation shows that portfolios with both estimated horizons have higher indexes, on average 6.99% per year. The hypothesis tests confirm the statistically significant superiority of the results of the proposed mathematical model's indexes. The model's indexes are also compared with portfolios that use just one of the estimated horizons; here the indexes of the dual-horizon portfolios outperform the single-horizon portfolios, though with a decrease in the percentage of statistically significant superiority.
Waxman, D.
2011-01-01
The fixation probability is determined when population size and selection change over time and differs from Kimura’s result, with long-term implications for a population. It is found that changes in population size are not equivalent to the corresponding changes in selection and can result in less drift than anticipated.
Park Jinho
2012-06-01
Background: Myocardial ischemia can develop into more serious diseases. Detecting the ischemic syndrome in the electrocardiogram (ECG) early, accurately, and automatically can prevent it from developing into a catastrophic disease. To this end, we propose a new method, which employs wavelets and simple feature selection. Methods: For training and testing, the European ST-T database is used, which is comprised of 367 ischemic ST episodes in 90 records. We first remove baseline wandering, and detect time positions of QRS complexes by a method based on the discrete wavelet transform. Next, for each heart beat, we extract three features which can be used for differentiating ST episodes from normal: (1) the area between QRS offset and T-peak points, (2) the normalized and signed sum from QRS offset to effective zero voltage point, and (3) the slope from QRS onset to offset point. We average the feature values for five successive beats to reduce effects of outliers. Finally we apply classifiers to those features. Results: We evaluated the algorithm by kernel density estimation (KDE) and support vector machine (SVM) methods. Sensitivity and specificity for KDE were 0.939 and 0.912, respectively. The KDE classifier detects 349 ischemic ST episodes out of the total 367 ST episodes. Sensitivity and specificity of SVM were 0.941 and 0.923, respectively. The SVM classifier detects 355 ischemic ST episodes. Conclusions: We proposed a new method for detecting ischemia in ECG. It contains signal processing techniques of removing baseline wandering and detecting time positions of QRS complexes by discrete wavelet transform, and feature extraction from morphology of ECG waveforms explicitly. It was shown that the number of selected features was sufficient to discriminate ischemic ST episodes from the normal ones. We also showed how the proposed KDE classifier can automatically select kernel bandwidths, meaning that the algorithm does not require any numerical
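A KDE classifier of the kind described fits one density per class and assigns a beat to the class with the higher density at its feature value. A minimal one-feature sketch follows, using Silverman's rule of thumb as one automatic bandwidth choice; the feature values are invented, and the paper's actual bandwidth-selection scheme is not specified here:

```python
import math

def silverman_bandwidth(xs):
    """Rule-of-thumb bandwidth: 1.06 * sd * n^(-1/5)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return 1.06 * sd * n ** (-0.2)

def kde(xs, h):
    """Gaussian kernel density estimate built from training points xs."""
    def density(t):
        return sum(math.exp(-0.5 * ((t - x) / h) ** 2) for x in xs) / (
            len(xs) * h * math.sqrt(2 * math.pi))
    return density

# Hypothetical values of a single ST-segment feature per averaged beat:
ischemic = [0.9, 1.1, 1.3, 1.0, 1.2, 1.4]
normal = [0.1, 0.2, 0.0, 0.3, 0.15, 0.25]

d_isch = kde(ischemic, silverman_bandwidth(ischemic))
d_norm = kde(normal, silverman_bandwidth(normal))

def classify(t):
    return "ischemic" if d_isch(t) > d_norm(t) else "normal"

print(classify(1.05), classify(0.2))
```

With three features, as in the paper, the kernels would be multivariate, with one bandwidth per dimension chosen the same way.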
Vibha Singhal
Irisin and FGF21 are novel hormones implicated in the "browning" of white fat, thermogenesis, and energy homeostasis. However, there are no data regarding these hormones in amenorrheic athletes (AA) (a chronic energy-deficit state) compared with eumenorrheic athletes (EA) and non-athletes. We hypothesized that irisin and FGF21 would be low in AA, an adaptive response to low energy stores. Furthermore, because (i) brown fat has positive effects on bone, and (ii) irisin and FGF21 may directly impact bone, we hypothesized that bone density, structure and strength would be positively associated with these hormones in athletes and non-athletes. To test our hypotheses, we studied 85 females, 14-21 years [38 AA, 24 EA and 23 non-athletes (NA)]. Fasting serum irisin and FGF21 were measured. Body composition and bone density were assessed using dual energy X-ray absorptiometry, bone microarchitecture using high resolution peripheral quantitative CT, strength estimates using finite element analysis, resting energy expenditure (REE) using indirect calorimetry and time spent exercising/week by history. Subjects did not differ for pubertal stage. Fat mass was lowest in AA. AA had lower irisin and FGF21 than EA and NA, even after controlling for fat and lean mass. Across subjects, irisin was positively associated with REE and bone density Z-scores, volumetric bone mineral density (total and trabecular), stiffness and failure load. FGF21 was negatively associated with hours/week of exercise and cortical porosity, and positively with fat mass and cortical volumetric bone density. Associations of irisin (but not FGF21) with bone parameters persisted after controlling for potential confounders. In conclusion, irisin and FGF21 are low in AA, and irisin (but not FGF21) is independently associated with bone density and strength in athletes.
Kaneko, K
1998-01-01
The strength of an attractor is studied by the rate of return to itself after perturbations, for a multi-attractor state of a globally coupled map. It is found that fragile (Milnor) attractors have a large basin volume in the partially ordered phase. Such dominance of fragile attractors is understood through the robustness of global attraction in the phase space. Changes of the attractor strength and basin volume with the parameter and system size are studied. In the partially ordered phase, the dynamics is often described as a Milnor attractor network, which leads to a new interpretation of chaotic itinerancy. Noise-induced selection of fragile attractors is found, with a sharp dependence on the noise amplitude. Relevance of the observed results to neural dynamics and cell differentiation is also discussed.
Henderson, Donald M; Nicholls, Robert
2015-08-01
Motivated by the palaeo-art work "Double Death" (2011), a biomechanical analysis using three-dimensional digital models was conducted to assess the potential of a pair of the large, Late Cretaceous theropod dinosaurs Carcharodontosaurus saharicus to successfully lift a medium-sized sauropod and not lose balance. Limaysaurus tessonei from the Late Cretaceous of South America was chosen as the sauropod as it is more completely known, but closely related to, the rebbachisaurid sauropods found in the same deposits as C. saharicus. The body models incorporate the details of the low-density regions associated with lungs, systems of air sacs, and pneumatized axial skeletal regions. These details, along with the surface meshes of the models, were used to estimate the body masses and centers of mass of the two animals. It was found that a 6 t C. saharicus could successfully lift a mass of 2.5 t and not lose balance, as the combined center of mass of the body and the load in the jaws would still be over the feet. However, the neck muscles were found to be capable of producing only enough force to hold up the head with an added mass of 424 kg held at the midpoint of the maxillary tooth row. The jaw adductor muscles were more powerful, and could have held a load of 512 kg. The more limiting neck constraint leads to the conclusion that two adult C. saharicus could successfully lift a L. tessonei with a maximum body mass of 850 kg and a body length of 8.3 m.
Ingle, Lee; Sleap, Mike; Tolfrey, Keith
2006-09-01
Complex training, a combination of resistance training and plyometrics, is growing in popularity, despite limited support for its efficacy. In pre- and early pubertal children, the study of complex training has been limited, and to our knowledge an examination of its effect on anaerobic performance characteristics of the upper and lower body has not been undertaken. Furthermore, the effect of detraining after complex training requires clarification. The physical characteristics (mean ± s) of the 54 male participants in the present study were as follows: age 12.3 ± 0.3 years, height 1.57 ± 0.07 m, body mass 50.3 ± 11.0 kg. Participants were randomly assigned to an experimental (n = 33) or control group (n = 21). The training, which was performed three times a week for 12 weeks, included a combination of dynamic constant external resistance and plyometrics. After training, participants completed 12 weeks of detraining. At baseline, after training and after detraining, peak and mean anaerobic power, dynamic strength and athletic performance were assessed. Twenty-six participants completed the training and none reported any training-related injury. Complex training was associated with small increases in anaerobic power (P < 0.05). In the experimental group, dynamic strength was increased by 24.3-71.4% (dependent on muscle group; P < 0.05). For 40-m sprint running, basketball chest pass and vertical jump test performance, the experimental group saw a small improvement (P < 0.05). In conclusion, in pre- and early pubertal boys, upper and lower body complex training is a time-effective and safe training modality that confers small improvements in anaerobic power and jumping, throwing and sprinting performance, and marked improvements in dynamic strength. However, after detraining, the benefits of complex training are lost at similar rates to other training modalities.
1983-10-01
height) concrete gravity dams, which is approximately 10 percent of the total number of major dams in the world. Prior to 1900, the only stability ... Recoverable citations include "Characteristics of Minerals" (Geotechnique, Vol. 12, p. 319) and the International Commission on Large Dams, "World Register of Dams" (Paris, 1973); very high strength rock (>30000) includes quartzite, dolerite, gabbro and basalt.
Amir Ghanifar
2016-06-01
Introduction: The concentration of substances, including urea, creatinine, and uric acid, can be used as an index to measure toxic uremic solutes in the blood during dialysis and interdialytic intervals. The on-line monitoring of toxin concentration allows for the clearance measurement of some low-molecular-weight solutes at any time during hemodialysis. The aim of this study was to determine the optimal wavelengths for estimating the changes in urea, creatinine, and uric acid in dialysate, using ultraviolet (UV) spectroscopy. Materials and Methods: In this study, nine uremic patients were investigated, using on-line spectrophotometry. The on-line absorption measurements (UV radiation) were performed with a spectrophotometer module, connected to the fluid outlet of the dialysis machine. Dialysate samples were obtained and analyzed, using standard biochemical methods. Optimal wavelengths for both creatinine and uric acid were selected by using a combination of genetic algorithms (GAs), i.e., GA-partial least squares (GA-PLS) and interval partial least squares (iPLS). Results: The artificial neural network (ANN) sensitivity analysis determined the wavelengths of the UV band most suitable for estimating the concentration of creatinine and uric acid. The two optimal wavelengths were 242 and 252 nm for creatinine and 295 and 298 nm for uric acid. Conclusion: It can be concluded that the reduction ratio of creatinine and uric acid (dialysis efficiency) could be continuously monitored during hemodialysis by UV spectroscopy. Compared to the conventional method, which is particularly sensitive to the sampling technique and involves post-dialysis blood sampling, iterative measurements throughout the dialysis session can yield more reliable data.
Hashim, Roslan; Roy, Chandrabhushan; Motamedi, Shervin; Shamshirband, Shahaboddin; Petković, Dalibor; Gocic, Milan; Lee, Siew Cheng
2016-05-01
Rainfall is a complex atmospheric process that varies over time and space. Researchers have used various empirical and numerical methods to enhance estimation of rainfall intensity. We developed a novel prediction model in this study, with the emphasis on accuracy to identify the most significant meteorological parameters having effect on rainfall. For this, we used five input parameters: wet day frequency (dwet), vapor pressure (e̅a), and maximum and minimum air temperatures (Tmax and Tmin) as well as cloud cover (cc). The data were obtained from the Indian Meteorological Department for the Patna city, Bihar, India. Further, a type of soft-computing method, known as the adaptive-neuro-fuzzy inference system (ANFIS), was applied to the available data. In this respect, the observation data from 1901 to 2000 were employed for testing, validating, and estimating monthly rainfall via the simulated model. In addition, the ANFIS process for variable selection was implemented to detect the predominant variables affecting the rainfall prediction. Finally, the performance of the model was compared to other soft-computing approaches, including the artificial neural network (ANN), support vector machine (SVM), extreme learning machine (ELM), and genetic programming (GP). The results revealed that ANN, ELM, ANFIS, SVM, and GP had R2 of 0.9531, 0.9572, 0.9764, 0.9525, and 0.9526, respectively. Therefore, we conclude that the ANFIS is the best method among all to predict monthly rainfall. Moreover, dwet was found to be the most influential parameter for rainfall prediction, and the best predictor of accuracy. This study also identified sets of two and three meteorological parameters that show the best predictions.
Baker, Daniel G
2017-06-01
A number of studies have established that higher levels of strength and power, tested at the end of the preseason, distinguish between playing level in professional rugby league. How this may impact the ability of players to get selected for final payoff games some 30 weeks later has not been fully investigated. The purpose of this study was to compare measures of upper- and lower-body strength between players from the same professional club, designated as those 17 players who attained selection and played in the team that won the Grand Final of the National Rugby League competition (GF) and those who did not attain selection (NSGF). Players were tested and compared for 1 repetition maximum bench press and full squat strength levels at the end of the preparation period, 30 weeks before the GF, using traditional significance analysis of variance and effect size (ES) statistics. Furthermore, the players were analyzed according to the 2 broad positional playing groups of forwards (FWD) and backs (BL). The results demonstrated that overall, the GF players were stronger than NSGF players by approximately 10 and 15%, respectively, for the upper and lower body. When analyzed according to positional groupings, there were significant differences and large ES for GF forwards, who were significantly stronger, heavier, and older than NSGF FWD players. For the BL groups, the differences between the groups were not significant. Because of the intense physical collisions inherent in rugby league, it would appear that higher levels of strength afford players greater performance benefits, resiliency against injury, and greater likelihood of being selected in the most important games at the end of the season.
Estimation of sediment sources using selected chemical tracers in the Perry Lake Basin, Kansas, USA
K.E. Juracek; A.C. Ziegler
2009-01-01
The ability to achieve meaningful decreases in sediment loads to reservoirs requires a determination of the relative importance of sediment sources within the contributing basins. In an investigation of sources of fine-grained sediment (clay and silt) within the Perry Lake Basin in northeast Kansas, representative samples of channel-bank sources, surface-soil sources (cropland and grassland), and reservoir bottom sediment were collected, chemically analyzed, and compared. The samples were sieved to isolate the <63 μm fraction and analyzed for selected nutrients (total nitrogen and total phosphorus), organic and total carbon, 25 trace elements, and the radionuclide cesium-137 (137Cs). On the basis of substantial and consistent compositional differences among the source types, total nitrogen (TN), total phosphorus (TP), total organic carbon (TOC), and 137Cs were selected for use in the estimation of sediment sources. To further account for differences in particle-size composition between the sources and the reservoir bottom sediment, constituent ratio and clay-normalization techniques were used. Computed ratios included TOC to TN, TOC to TP, and TN to TP. Constituent concentrations (TN, TP, TOC) and activities (137Cs) were normalized by dividing by the percentage of clay. Thus, the sediment-source estimations involved the use of seven sediment-source indicators. Within the Perry Lake Basin, the consensus of the seven indicators was that both channel-bank and surface-soil sources were important in the Atchison County Lake and Banner Creek Reservoir subbasins, whereas channel-bank sources were dominant in the Mission Lake subbasin. On the sole basis of 137Cs activity, surface-soil sources contributed the most fine-grained sediment to Atchison County Lake, and channel-bank sources contributed the most fine-grained sediment to Banner Creek Reservoir and Mission Lake. Both the seven-indicator consensus and 137Cs indicated that channel-bank sources were dominant for Perry Lake and that channel
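For two source types, tracer-based source estimation of this kind often reduces to end-member mixing: the fraction of reservoir sediment from surface soils follows from a linear mass balance on a conservative tracer. A sketch with hypothetical 137Cs activities (not values from the study):

```python
def source_proportion(tracer_mix, tracer_bank, tracer_surface):
    """Two-end-member mixing: fraction of the reservoir-sediment mixture
    derived from surface soils, from a conservative tracer such as 137Cs."""
    return (tracer_mix - tracer_bank) / (tracer_surface - tracer_bank)

# Hypothetical 137Cs activities (Bq/kg): channel banks lie below the
# fallout-labeled surface layer, so their activity is low; cultivated and
# grassland surface soils retain fallout 137Cs, so theirs is high.
p = source_proportion(tracer_mix=3.2, tracer_bank=0.5, tracer_surface=8.0)
print(round(p, 2), round(1 - p, 2))  # surface-soil vs channel-bank share
```

With seven indicators, as in the study, each indicator yields such an estimate and the "consensus" is read across them; clay normalization corrects each tracer value before the mixing calculation.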
The Influence of Study Species Selection on Estimates of Pesticide Exposure in Free-Ranging Birds
Borges, Shannon L.; Vyas, Nimish B.; Christman, Mary C.
2014-02-01
Field studies of pesticide effects on birds often utilize indicator species with the purpose of extrapolating to other avian taxa. Little guidance exists for choosing indicator species to monitor the presence and/or effects of contaminants that are labile in the environment or body, but are acutely toxic, such as anticholinesterase (anti-ChE) insecticides. Use of an indicator species that does not represent maximum exposure and/or effects could lead to inaccurate risk estimates. Our objective was to test the relevance of a priori selection of indicator species for a study on pesticide exposure to birds inhabiting fruit orchards. We used total plasma ChE activity and ChE reactivation to describe the variability in anti-ChE pesticide exposure among avian species in two conventionally managed fruit orchards. Of seven species included in statistical analyses, the less common species, the chipping sparrow (Spizella passerina), showed the greatest percentage of exposed individuals and the greatest ChE depression, whereas the two most common species, American robins (Turdus migratorius) and gray catbirds (Dumetella carolinensis), did not show significant exposure. Due to their lower abundance, chipping sparrows would have been an unlikely choice for study. Our results show that selection of indicator species using traditionally accepted criteria such as abundance and ease of collection may not identify species that are at greatest risk. Our efforts also demonstrate the usefulness of conducting multiple-species pilot studies prior to initiating detailed studies on pesticide effects. A study such as ours can help focus research and resources on study species that are most appropriate.
Maćkała Krzysztof
2015-03-01
Introduction. Distance running performance is a simple function of developing high speed and maintaining this speed as long as possible. Thus a correct running technique becomes an important component of performance. Technique is effective if the competitor can reach a better performance result with the same or lower energy consumption. The purpose of this investigation was to examine whether a six-week application of explosive-type strength training improves lower extremity power and maximum speed, in order to facilitate running technique in a sub-elite male middle-distance runner. Material and methods. The sub-elite runner performed special exercises and running drills twice a week. He completed pre- and post-training jumping (SJ, CMJ, standing long jump, standing five jump) and speed (20 m from standing and flying start) field tests. For kinematical analysis, video (SIMI Motion System) of a 10 m sprint from a 20 m flying start was collected. Results. Improvement occurred in all measurements, but strong changes were evident in the 10 m from a 20 m flying start and in stride frequency, which rose from 3.90 to 4.01 Hz due to a decrease in ground contact time from 160 to 156 ms. There was no strong evidence of changes in the participant's running technique. Conclusion. This proved that a six-week dynamic-type strength program seems to improve the neuromuscular characteristics of running speed and explosive power, with no changes in running technique.
Visser, Marcel E; Gienapp, Phillip; Husby, Arild; Morrisey, Michael; de la Hera, Iván; Pulido, Francisco; Both, Christiaan
2015-04-01
Climate change has differentially affected the timing of seasonal events for interacting trophic levels, and this has often led to increased selection on seasonal timing. Yet, the environmental variables driving this selection have rarely been identified, limiting our ability to predict future ecological impacts of climate change. Using a dataset spanning 31 years from a natural population of pied flycatchers (Ficedula hypoleuca), we show that directional selection on timing of reproduction intensified in the first two decades (1980-2000) but weakened during the last decade (2001-2010). Against expectation, this pattern could not be explained by the temporal variation in the phenological mismatch with food abundance. We therefore explored an alternative hypothesis that selection on timing was affected by conditions individuals experience when arriving in spring at the breeding grounds: arriving early in cold conditions may reduce survival. First, we show that in female recruits, spring arrival date in the first breeding year correlates positively with hatch date; hence, early-hatched individuals experience colder conditions at arrival than late-hatched individuals. Second, we show that when temperatures at arrival in the recruitment year were high, early-hatched young had a higher recruitment probability than when temperatures were low. We interpret this as a potential cost of arriving early in colder years, and climate warming may have reduced this cost. We thus show that higher temperatures in the arrival year of recruits were associated with stronger selection for early reproduction in the years these birds were born. As arrival temperatures in the beginning of the study increased, but recently declined again, directional selection on timing of reproduction showed a nonlinear change. We demonstrate that environmental conditions with a lag of up to two years can alter selection on phenological traits in natural populations, something that has important
Bernier, Nicolas; Bracke, Lieven; Malet, Loïc; Godet, Stéphane
2014-12-01
The effect of finish rolling temperature on the austenite (γ) to bainite (α) phase transformation is quantitatively investigated in high-strength C-Mn steels using an alternative crystallographic γ reconstruction procedure, which can be directly applied to experimental electron backscatter diffraction mappings. In particular, the current study aims to clarify the respective contributions of the γ conditioning during the hot rolling and the variant selection during the phase transformation to the inherited texture. The results confirm that the sample finish rolled at the lowest temperature [1102 K (829 °C)] exhibits the sharpest transformation texture. It is shown that this sharp texture is exclusively due to a strong variant selection from parent brass {110}, S {213} and Goss {110} grains, whereas the variant selection from the copper {112} grains is insensitive to the finish rolling temperature. In addition, a statistical variant selection analysis proves that the habit planes of the selected variants do not systematically correspond to the predicted active γ slip planes using the Taylor model. In contrast, a correlation between the Bain group to which the selected variants belong and the finish rolling temperature is clearly revealed, regardless of the parent orientation. These results are discussed in terms of polygranular accommodation mechanisms, especially in view of the observed development in the hot-rolled samples of high-angle grain boundaries with misorientation axes between γ and γ.
Connectivity among subpopulations of Louisiana black bears as estimated by a step selection function
Clark, Joseph D.; Jared S. Laufenberg,; Maria Davidson,; Jennifer L. Murrow,
2015-01-01
Habitat fragmentation is a fundamental cause of population decline and increased risk of extinction for many wildlife species; animals with large home ranges and small population sizes are particularly sensitive. The Louisiana black bear (Ursus americanus luteolus) exists only in small, isolated subpopulations as a result of land clearing for agriculture, but the relative potential for inter-subpopulation movement by Louisiana black bears has not been quantified, nor have characteristics of effective travel routes between habitat fragments been identified. We placed and monitored global positioning system (GPS) radio collars on 8 female and 23 male bears located in 4 subpopulations in Louisiana, which included a reintroduced subpopulation located between 2 of the remnant subpopulations. We compared characteristics of sequential radiolocations of bears (i.e., steps) with steps that were possible but not chosen by the bears to develop step selection function models based on conditional logistic regression. The probability of a step being selected by a bear increased as the distance to natural land cover and agriculture at the end of the step decreased and as distance from roads at the end of a step increased. To characterize connectivity among subpopulations, we used the step selection models to create 4,000 hypothetical correlated random walks for each subpopulation representing potential dispersal events to estimate the proportion that intersected adjacent subpopulations (hereafter referred to as successful dispersals). Based on the models, movement paths for males intersected all adjacent subpopulations but paths for females intersected only the most proximate subpopulations. Cross-validation and genetic and independent observation data supported our findings. Our models also revealed that successful dispersals were facilitated by a reintroduced population located between 2 distant subpopulations. Successful dispersals for males were dependent on natural land
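The step-selection comparison described above, each observed step matched against available-but-unchosen steps in a conditional logistic regression, can be sketched as follows. This is an illustrative simulation, not the authors' data or code: the three covariates, their effect sizes, and the plain gradient-descent fitting are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical step covariates: distance to natural cover, distance to
# agriculture, distance to roads (standardised). Signs mimic the abstract:
# selection increases as the first two shrink and the third grows.
true_beta = np.array([-1.0, -0.5, 0.8])
n_strata, n_choices = 300, 5      # 1 observed step + 4 available steps each

X = rng.normal(size=(n_strata, n_choices, 3))
util = X @ true_beta
probs = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
chosen = np.array([rng.choice(n_choices, p=p) for p in probs])

def nll_grad(beta):
    """Conditional-logit negative log-likelihood and gradient over strata."""
    u = X @ beta
    u -= u.max(axis=1, keepdims=True)              # numerical stability
    p = np.exp(u) / np.exp(u).sum(axis=1, keepdims=True)
    nll = -np.log(p[np.arange(n_strata), chosen]).sum()
    grad = ((p[..., None] * X).sum(axis=1)
            - X[np.arange(n_strata), chosen]).sum(axis=0)
    return nll, grad

beta = np.zeros(3)
for _ in range(2000):                              # plain gradient descent
    _, g = nll_grad(beta)
    beta -= 0.1 * g / n_strata

print(beta)  # should approach true_beta
```

In practice such models are fitted with a conditional-logistic routine (e.g. in survival-analysis packages); the hand-rolled optimiser here only serves to make the matched-set likelihood explicit.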
B. Prasad
2016-06-01
In this paper, the outage performance of a secondary user (SU) is evaluated under an amplify-and-forward (AF) relay selection scheme with imperfect channel state information (CSI) while sharing spectrum in an underlay cognitive radio network (CRN). In underlay operation, the SU coexists with the primary user (PU) in the same band provided the interference produced by the SU at the PU receiver is below the interference threshold of the PU, which limits the transmission power and coverage area of the SU. Relays help to improve the performance of the SU in underlay. However, relays are also constrained in transmit power by the interference constraint imposed by the PU. A closed-form expression for the outage probability of the SU with a maximum transmit power constraint of the relay under imperfect CSI is derived. A scaling-factor-based power control is used for the SU transmitter and the relay in order to maintain the interference constraint at the PU receiver under imperfect CSI. The impact of different parameters, viz. correlation coefficient, channel estimation error, tolerable interference threshold, number of relays, and the maximum transmit power constraint of the relay, on SU performance is investigated. A MATLAB-based test bed has also been developed to carry out simulations to validate the theoretical results.
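The underlay power-control rule described above (transmit at full power unless the interference at the PU receiver would exceed its threshold) can be illustrated with a Monte Carlo sketch. This is a deliberately simplified model with perfect CSI, Rayleigh fading, and a min-SNR approximation of the two-hop AF link; all parameter values and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n_trials = 200_000
K = 3                # number of relays
I_th = 1.0           # interference threshold at the PU receiver
P_max = 10.0         # maximum transmit power (SU and relays)
N0 = 1.0             # noise power
R = 1.0              # target rate, bits/s/Hz

# Rayleigh fading -> exponential channel power gains
g_sr = rng.exponential(size=(n_trials, K))   # SU -> relay k
g_rd = rng.exponential(size=(n_trials, K))   # relay k -> destination
g_sp = rng.exponential(size=n_trials)        # SU -> PU (interference link)
g_rp = rng.exponential(size=(n_trials, K))   # relay k -> PU

# Underlay power control: transmit at up to P_max, capped so the
# interference received at the PU stays below I_th.
P_s = np.minimum(P_max, I_th / g_sp)[:, None]
P_r = np.minimum(P_max, I_th / g_rp)

# End-to-end SNR of each two-hop path, approximated by the weaker hop;
# best-relay selection picks the strongest path.
snr = np.minimum(P_s * g_sr, P_r * g_rd) / N0
snr_best = snr.max(axis=1)

# Half-duplex relaying costs a factor 1/2 in rate.
outage = np.mean(0.5 * np.log2(1.0 + snr_best) < R)
print(f"estimated outage probability: {outage:.3f}")
```

Relay selection can only help: the outage with the best of K relays is never worse than with a single fixed relay, which the simulation reproduces by construction.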
Amidi, Salimeh; Mojab, Faraz; Bayandori Moghaddam, Abdolmajid; Tabib, Kimia; Kobarfard, Farzad
2012-01-01
Clinical and epidemiological studies have shown that a diet rich in fruits and vegetables is associated with a decreased risk of cardiovascular diseases, cancers and other related disorders. These beneficial health effects have been attributed in part to the presence of antioxidants in dietary plants. Therefore, screening for antioxidant properties of plant extracts has been one of the interests of scientists in this field. Different screening methods for evaluating the antioxidant properties of plant extracts have been reported in the literature. In the present research, a rapid screening method based on cyclic voltammetry is introduced for antioxidant screening of selected medicinal plant extracts. Cyclic voltammetry of methanolic extracts of seven medicinal plants (Buxus hyrcana, Rumex crispus, Achillea millefolium, Zataria multiflora, Ginkgo biloba, Lippia citriodora and Heptaptera anisoptera) was carried out at different scan rates. Based on the interpretation of the voltammograms, Rumex crispus, Achillea millefolium and Ginkgo biloba showed higher antioxidant capability than the others, while Lippia citriodora contained the highest amount of antioxidants. Cyclic voltammetry is expected to be a simple method for screening antioxidants and estimating the antioxidant activity of foods and medicinal plants.
K. Rajeswari
2015-04-01
A novel hybrid channel estimator is proposed for a multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) system with per-subcarrier transmit antenna selection and optimal power allocation among subcarriers. In practice, antenna selection information is transmitted through a binary symmetric control channel with a crossover probability. The linear minimum mean-square error (LMMSE) technique is optimal for channel estimation in a MIMO-OFDM system. Although the LMMSE estimator performs well at low signal-to-noise ratio (SNR), in the presence of antenna-to-subcarrier-assignment (ATSA) error it introduces an irreducible error floor at high SNR. We have proved that a relaxed MMSE (RMMSE) estimator overcomes this performance degradation at high SNR. The proposed hybrid estimator combines the benefits of the LMMSE estimator at low SNR and the RMMSE estimator at high SNR. The vector mean square error (MSE) expression is reduced to a scalar expression so that optimal power allocation can be performed. A convex optimization problem is formulated and solved to allocate power to subcarriers so as to minimize the MSE, subject to a transmit sum-power constraint. Further, an analytical expression is derived for the SNR threshold at which the hybrid estimator is switched from LMMSE to RMMSE. Simulation results show that the proposed hybrid estimator gives robust performance irrespective of ATSA error.
B. J. Rashmi
2015-08-01
This paper aims at improving the mechanical behavior of biobased brittle amorphous polylactide (PLA) by extrusion melt-blending with biobased semi-crystalline polyamide 11 (PA11) and addition of halloysite nanotubes (HNT). The morphological analysis of the PLA/PA11/HNT blends shows a strong interface between the two polymeric phases due to hydrogen bonding, and the migration of HNTs towards the PA11 phase, inducing their selective localization in one of the polymeric phases of the blend. A 'salami-like' structure is formed, revealing a HNT-rich tubular-like (fibrillar) PA11 phase. Moreover, HNTs localized in the dispersed phase act as nucleating agents for PA11. Compared to neat PLA, this leads to a remarkable improvement in tensile and impact properties (elongation at break is multiplied by a factor of 43 and impact strength by 2), whereas tensile strength and stiffness are almost unchanged. The toughening mechanism is discussed based on the combined effect of resistance to crack propagation and the load-bearing capacity of the nanotubes due to the existence of the fibrillar structure. Thus, blending brittle PLA with PA11 and HNT nanotubes results in tailor-made PLA-based compounds with enhanced ductility without sacrificing stiffness and strength.
Kabasawa, M.; Funakawa, Y.; Ogawa, K. [NKK Corp., Tokyo (Japan); Tamura, M. [Kokan Keisoku Co. Ltd., Tokyo (Japan)
1996-11-05
In recent years, the use of thinner, higher strength steel sheets has been promoted for car body weight reduction, and higher strength steel sheets are also increasingly used to improve collision safety. Under these conditions, estimating the strength of the most fundamental single spot welded joint becomes important, because body and parts strengths are largely governed by the strength of the welded joint. Although the relationships between shear strength and the strength, thickness and nugget diameter of the steel sheets have been investigated to date and many empirical equations have been obtained, the numerical analysis results were case-specific, and the empirical equations obtained in earlier studies applied only to narrow ranges whose limits of application could not be predicted. In this study, for joints produced under welding conditions corresponding to class A of Japan Welding Society standard WES7301, and targeting the low carbon steel sheets containing more than 0.03% carbon that are widely used for car bodies, an empirical equation was derived to estimate the tensile shear strength specified in JIS Z3136 from sheet thickness, base material properties and nugget diameter. 10 refs., 17 figs., 4 tabs.
Li, Juan; Jiang, Yue; Fan, Qi; Chen, Yang; Wu, Ruanqi
2014-05-05
This paper establishes a high-throughput and highly selective method to determine the impurity oxidized glutathione (GSSG) and the radial tensile strength (RTS) of reduced glutathione (GSH) tablets, based on near-infrared (NIR) spectroscopy and partial least squares (PLS). In order to build and evaluate the calibration models, the NIR diffuse reflectance spectra (DRS) and transmittance spectra (TS) of 330 GSH tablets were accurately measured using optimized parameter values. For analyzing GSSG or RTS of GSH tablets, the NIR-DRS or NIR-TS were selected, subdivided into calibration and prediction sets, and processed with appropriate chemometric techniques. After selecting spectral sub-ranges and rejecting spectral outliers, the PLS calibration models were built and the numbers of factors were optimized. The PLS models were then evaluated by the root mean square errors of calibration (RMSEC), cross-validation (RMSECV) and prediction (RMSEP), and by the correlation coefficients of calibration (Rc) and prediction (Rp). The results indicate that the proposed models perform well. It is thus clear that NIR-PLS can simultaneously, selectively, nondestructively and rapidly analyze the GSSG and RTS of GSH tablets, even though the content of the GSSG impurity is quite low while that of the GSH active pharmaceutical ingredient (API) is quite high. This strategy can be an important complement to the common NIR methods used in the on-line analysis of APIs in pharmaceutical preparations, and this work expands NIR applications to high-throughput and extraordinarily selective analysis.
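The PLS calibration-and-evaluation workflow described above (build a model on a calibration set, then score RMSEC on the calibration set and RMSEP on held-out predictions) can be sketched with a minimal NIPALS PLS1 implementation on simulated "spectra". The data, component count, and noise levels are invented for illustration and do not reproduce the paper's measurements.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal NIPALS PLS1 sketch (mean-centres internally)."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)            # weight vector
        t = Xc @ w                        # scores
        tt = t @ t
        p = Xc.T @ t / tt                 # X loadings
        qk = yc @ t / tt                  # y loading
        Xc = Xc - np.outer(t, p)          # deflation
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)   # regression vector
    return B, x_mean, y_mean

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

rng = np.random.default_rng(2)
n, p_dim = 120, 60                        # samples x "wavelengths"
T_lat = rng.normal(size=(n, 2))           # two latent chemical factors
X = T_lat @ rng.normal(size=(2, p_dim)) + 0.05 * rng.normal(size=(n, p_dim))
y = T_lat @ np.array([1.0, -0.5]) + 0.05 * rng.normal(size=n)

X_cal, y_cal, X_prd, y_prd = X[:80], y[:80], X[80:], y[80:]
B, xm, ym = pls1_fit(X_cal, y_cal, n_components=2)
pred_cal = (X_cal - xm) @ B + ym
pred_prd = (X_prd - xm) @ B + ym
rmsec, rmsep = rmse(y_cal, pred_cal), rmse(y_prd, pred_prd)
print(f"RMSEC = {rmsec:.3f}, RMSEP = {rmsep:.3f}")
```

In production chemometrics one would use an established PLS routine with cross-validated factor selection; the sketch exposes the score/loading/deflation steps that such routines hide.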
Diógenes Manoel Pedroza de Azevedo
1998-09-01
The present study estimates variances and genetic and phenotypic correlations for five traits in 27 progenies of cashew trees (Anacardium occidentale L.). Data were obtained from a trial conducted in 1992 at Pacajus, Ceará, an experimental station of Embrapa Agroindústria Tropical. The characters studied were plant height (PH), North-South and East-West canopy spreads (NSS, EWS), and primary and secondary branch numbers (PBN, SBN). All genetic and phenotypic correlations presented positive and significant values. Selection to increase or decrease the average of any one of the five characteristics of cashew plants in the progenies studied affected the averages of the others. The 16-month-old canopy spread can be predicted from NSS or EWS, since the correlation between them was high. Correlations between PH and SBN were low, indicating a good possibility of obtaining smaller plants without causing drastic reductions in SBN. PH and SBN showed, respectively, the lowest and highest genetic variance estimates relative to the corresponding population means.
Estimation of sediment sources using selected chemical tracers in the Perry lake basin, Kansas, USA
Juracek, K.E.; Ziegler, A.C.
2009-01-01
The ability to achieve meaningful decreases in sediment loads to reservoirs requires a determination of the relative importance of sediment sources within the contributing basins. In an investigation of sources of fine-grained sediment (clay and silt) within the Perry Lake Basin in northeast Kansas, representative samples of channel-bank sources, surface-soil sources (cropland and grassland), and reservoir bottom sediment were collected, chemically analyzed, and compared. The samples were sieved, then analyzed for nutrients (total nitrogen and total phosphorus), organic and total carbon, 25 trace elements, and the radionuclide cesium-137 (137Cs). On the basis of substantial and consistent compositional differences among the source types, total nitrogen (TN), total phosphorus (TP), total organic carbon (TOC), and 137Cs were selected for use in the estimation of sediment sources. To further account for differences in particle-size composition between the sources and the reservoir bottom sediment, constituent ratio and clay-normalization techniques were used. Computed ratios included TOC to TN, TOC to TP, and TN to TP. Constituent concentrations (TN, TP, TOC) and activities (137Cs) were normalized by dividing by the percentage of clay. Thus, the sediment-source estimations involved the use of seven sediment-source indicators. Within the Perry Lake Basin, the consensus of the seven indicators was that both channel-bank and surface-soil sources were important in the Atchison County Lake and Banner Creek Reservoir subbasins, whereas channel-bank sources were dominant in the Mission Lake subbasin. On the sole basis of 137Cs activity, surface-soil sources contributed the most fine-grained sediment to Atchison County Lake, and channel-bank sources contributed the most fine-grained sediment to Banner Creek Reservoir and Mission Lake. Both the seven-indicator consensus and 137Cs indicated that channel-bank sources were dominant for Perry Lake and that channel-bank sources
Rashmi, Baralu Jagannatha; Prashantha, Kalappa; Lacrampe, Marie-France; Krawczak, Patricia
2016-03-01
This paper aims at improving the mechanical behavior of biobased brittle amorphous polylactide (PLA) by extrusion melt-blending with biobased semi-crystalline polyamide 11 (PA11) and addition of natural halloysite nanotubes (HNT). The structure and properties of PLA/PA11/HNT blends were studied in terms of morphological, thermal and mechanical properties. The morphological analysis of the PLA/PA11/HNT blends shows a strong interface between the two polymeric phases due to hydrogen bonding, and the migration of HNTs towards the PA11 phase, inducing their selective localization in one of the polymeric phases of the blend. A "salami-like" structure is formed revealing a HNTs-rich tubular-like (fibrillar) PA11 phase. Moreover, HNTs localized in the dispersed phase act as nucleating agents for PA11. Blending PLA (80 wt.%) and PA11 (20 wt.%) increases PLA ductility (elongation at break, ɛr, is multiplied by more than 20), however at the slight expense of strength and stiffness. Further addition of HNTs (2 wt.%) further increases ductility (ɛr reaches 155 %, i.e. it is multiplied by more than 40) whereas tensile strength and modulus of PLA are unchanged and impact strength is more than doubled. The toughening mechanism is discussed based on the combined effect of resistance to crack propagation and nanotubes load bearing capacity due to the existence of the fibrillar structure. Thus, blending brittle PLA with PA11 and HNT nanotubes results in tailor-made PLA-based compounds with enhanced ductility without sacrificing stiffness and strength.
Marker Assisted Selection can Reduce True as well as Pedigree Estimated Inbreeding
Pedersen, L D; Sørensen, A C; Berg, P
2009-01-01
This study investigated whether selection using genotype information reduced the rate and level of true inbreeding, that is, identity by descent, at a selectively neutral locus as well as a locus under selection compared with traditional BLUP selection. In addition, the founder representation...... at these loci and the within-family selection at the nonneutral locus were studied. The study was carried out using stochastic simulation of a population resembling the breeding nucleus of a dairy cattle population for 25 yr. Each year, 10 proven bulls were selected across herds along with 100 dams from within...
Lead Contamination in Selected Foods from Riyadh City Market and Estimation of the Daily Intake
Zeid A. Al Othman
2010-10-01
This study was carried out to determine lead contamination in 104 representative food items in the Saudi diet and to estimate the dietary lead intake of Saudi Arabians. Three samples of each selected food item were purchased from the local markets of Riyadh city, the capital of Saudi Arabia. Each pooled sample was analyzed in triplicate by ICP-AES after thorough homogenization. Sweets (0.011–0.199 μg/g), vegetables (0.002–0.195 μg/g), legumes (0.014–0.094 μg/g), eggs (0.079 μg/g), and meat and meat products (0.013–0.068 μg/g) were the richest sources of lead. Considering the amounts of each food consumed, the major food sources of lead intake for Saudis can be ranked as follows: vegetables (25.4%), cereal and cereal products (24.2%), beverages (9.7%), sweets (8.2%), legumes (7.4%), fruits (5.4%), and milk and milk products (5.1%). The daily intake of lead was calculated taking into account the concentration of this element in the edible part and the daily consumption data, which were derived from two sources: (a) the KSA food sheet provided by the Food and Agriculture Organization (FAO) and (b) questionnaires distributed among 300 families in Riyadh city. The results showed that the daily intakes of lead according to the two sources are 22.7 and 24.5 μg/person/day, respectively, which are lower than the value given by the Joint Expert Committee on Food Additives (JECFA) and comparable with those of other countries.
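The intake computation itself is simple arithmetic: each food group's lead concentration times the amount of that group consumed per day, summed over groups. A sketch with made-up concentrations and consumption figures (not the paper's values):

```python
# Hypothetical per-group lead concentrations (µg/g) and daily consumption (g)
concentration = {"vegetables": 0.05, "cereals": 0.04, "sweets": 0.10, "legumes": 0.05}
consumption   = {"vegetables": 120.0, "cereals": 140.0, "sweets": 20.0, "legumes": 35.0}

# Daily intake = sum over groups of concentration x amount consumed
daily_intake = sum(concentration[k] * consumption[k] for k in concentration)

# Percentage contribution of each group to the total intake
shares = {k: 100.0 * concentration[k] * consumption[k] / daily_intake
          for k in concentration}

print(f"{daily_intake:.1f} µg/person/day", shares)
```

Ranking the `shares` values reproduces the kind of source ranking reported in the abstract (vegetables first, cereals second, and so on) for whatever concentration and consumption inputs are supplied.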
Takayama, T.; Iwasaki, A.
2016-06-01
Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, prediction accuracy is affected by the small-sample-size problem, which commonly manifests as overfitting when the number of training samples is smaller than the dimensionality of the samples, owing to the time, cost, and human resources required for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to the narrow bandwidth, as well as local or global peak shifts due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model while encouraging sparsity and grouping: the sparsity addresses the small-sample-size problem through dimensionality reduction, and the grouping addresses the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods: multiple linear analysis, partial least squares regression, and lasso regression. Furthermore, fusion of spectral and spatial information derived from a texture index increased the prediction accuracy to an RMSE of 62.62 t/ha. This analysis demonstrates the efficiency of the fused lasso and image texture in biomass estimation of tropical forests.
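The fused lasso objective underlying this band-selection strategy can be written in standard notation (the symbols here are generic, not taken from the paper):

```latex
\hat{\beta} = \arg\min_{\beta}\;
  \| y - X\beta \|_2^2
  + \lambda_1 \sum_{j=1}^{p} |\beta_j|
  + \lambda_2 \sum_{j=2}^{p} |\beta_j - \beta_{j-1}|
```

The $\lambda_1$ term zeroes out uninformative bands (sparsity, hence dimensionality reduction), while the $\lambda_2$ difference penalty pushes adjacent wavelengths toward equal coefficients (grouping), which is what confers robustness to noise and small peak shifts.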
Bykov, D. L.; Konovalov, D. N.
2007-12-01
local stress concentration region, we consider the problem of the strain of a viscoelastic specimen used to determine the standard adhesion strength characteristics. The problem is solved numerically under the following assumptions: the specimen material is assumed to be linearly viscoelastic, and the specific absorbed energy in the stress concentration region is assumed to coincide in magnitude with the specific scattered energy. To estimate the accuracy of the numerical method, we use the solution of the model problem of the action of a plane circular die on a half-space consisting of a linearly viscoelastic incompressible material.
Shamsuddin, Shomon
2016-01-01
Many students enroll in less selective colleges than they are qualified to attend, despite low graduation rates at these institutions. Some scholars have argued that qualified students should enroll in the most selective colleges because they have greater resources to support student success. However, selective college attendance is endogenous, so…
Morrow, Connie E.; Accornero, Veronica H.; Xue, Lihua; Manjunath, Sudha; Culbertson, Jan L.; Anthony, James C.; Bandstra, Emmalee S.
2009-01-01
We estimated childhood risk of developing selected DSM-IV Disorders, including Attention-Deficit Hyperactivity Disorder (ADHD), Oppositional Defiant Disorder (ODD), and Separation Anxiety Disorder (SAD), in children with prenatal cocaine exposure (PCE). Children were enrolled prospectively at birth (n = 476) with prenatal drug exposures documented…
Berger, Lawrence M.; Bruch, Sarah K.; Johnson, Elizabeth I.; James, Sigrid; Rubin, David
2009-01-01
This study used data on 2,453 children aged 4-17 from the National Survey of Child and Adolescent Well-Being and 5 analytic methods that adjust for selection factors to estimate the impact of out-of-home placement on children's cognitive skills and behavior problems. Methods included ordinary least squares (OLS) regressions and residualized…
Northcutt Sally L
2010-04-01
Background: Molecular estimates of breeding value are expected to increase selection response due to improvements in the accuracy of selection and a reduction in generation interval, particularly for traits that are difficult or expensive to record or are measured late in life. Several statistical methods for incorporating molecular data into breeding value estimation have been proposed; however, most studies have utilized simulated data in which the generated linkage disequilibrium may not represent the targeted livestock population. A genomic relationship matrix was developed for 698 Angus steers and 1,707 Angus sires using 41,028 single nucleotide polymorphisms, and breeding values were estimated using feed efficiency phenotypes (average daily feed intake, residual feed intake, and average daily gain) recorded on the steers. The number of SNPs needed to accurately estimate a genomic relationship matrix was evaluated in this population. Results: Results were compared to estimates produced from pedigree-based mixed model analysis of 862 Angus steers with 34,864 identified paternal relatives but no female ancestors. Estimates of additive genetic variance and breeding value accuracies were similar for AFI and RFI using the numerator and genomic relationship matrices, despite fewer animals in the genomic analysis. Bootstrap analyses indicated that 2,500-10,000 markers are required for robust estimation of genomic relationship matrices in cattle. Conclusions: This research shows that breeding values and their accuracies may be estimated for commercially important sires for traits recorded in experimental populations, without the need for pedigree data to establish identity by descent between members of the commercial and experimental populations, when at least 2,500 SNPs are available for the generation of a genomic relationship matrix.
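A genomic relationship matrix of the kind described can be sketched with VanRaden's first method, which is the construction most commonly used in the field; whether the study used exactly this form is an assumption, and the genotypes below are simulated rather than real Angus data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated genotypes: n animals x m SNPs, coded 0/1/2 copies of an allele.
n, m = 50, 2000
p = rng.uniform(0.1, 0.9, size=m)            # true allele frequencies
M = rng.binomial(2, p, size=(n, m)).astype(float)

# VanRaden (2008) method 1: G = Z Z' / (2 * sum p(1-p)),
# where Z centres each genotype column by twice its allele frequency.
p_hat = M.mean(axis=0) / 2.0
Z = M - 2.0 * p_hat
G = Z @ Z.T / (2.0 * np.sum(p_hat * (1.0 - p_hat)))

print(G.shape, round(float(np.mean(np.diag(G))), 2))
```

For unrelated individuals in Hardy-Weinberg equilibrium the diagonal of G averages close to 1, mirroring a numerator relationship matrix; in a breeding analysis G replaces the pedigree-based matrix in the mixed-model equations.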
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom;
2016-01-01
time. Additionally, we show via three common examples how the grid size depends on parameters such as the number of data points or the number of sensors in DOA estimation. We also demonstrate that the computation time can potentially be lowered by several orders of magnitude by combining a coarse grid......In many spectral estimation and array processing problems, the process of finding estimates of model parameters often involves the optimisation of a cost function containing multiple peaks and dips. Such non-convex problems are hard to solve using traditional optimisation algorithms developed...
Sofiev, Mikhail; Soares, Joana; Kouznetsov, Rostislav; Vira, Julius; Prank, Marje
2016-04-01
Top-down emission estimation via inverse dispersion modelling is used for various problems where bottom-up approaches are difficult or highly uncertain. One such area is the estimation of emissions from wild-land fires. In combination with dispersion modelling, satellite and/or in-situ observations can, in principle, be used to efficiently constrain the emission values. This is the main strength of the approach: the a priori values of the emission factors (based on laboratory studies) are refined for real-life situations using the inverse-modelling technique. However, the approach also has major uncertainties, which are illustrated here with a few examples from the Integrated System for wild-land Fires (IS4FIRES). IS4FIRES generates the smoke emission and injection profile from MODIS and SEVIRI active-fire radiative energy observations. The emission calculation includes two steps: (i) an initial top-down calibration of emission factors via inverse dispersion problem solution, made once using a training dataset from the past, and (ii) application of the obtained emission coefficients to individual-fire radiative energy observations, thus leading to a bottom-up emission compilation. For such a procedure, the major classes of uncertainty include: (i) imperfect information on fires, (ii) simplifications in the fire description, (iii) inaccuracies in the smoke observations and modelling, and (iv) inaccuracies of the inverse problem solution. Using examples of the fire seasons of 2010 in Russia, 2012 in Eurasia, 2007 in Australia, etc., it is pointed out that a top-down system calibration performed for a limited number of comparatively moderate cases (often the best-observed ones) may lead to errors when applied to extreme events. For instance, the total emission of the 2010 Russian fires is likely to be over-estimated by up to 50% if the calibration is based on the season 2006 and the fire description is simplified. A longer calibration period and more sophisticated parameterization
Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.
Olejnik, Stephen F.; Algina, James
1987-01-01
Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)
Choi, Sukhwan; Li, C. James
2006-09-01
Gears are common power transmission elements and are frequently responsible for transmission failures. Since a tooth crack is not directly measurable while a gear is in operation, an indirect method is needed to estimate its size from measurable quantities. This study developed such a method to estimate the size of a transverse tooth crack for a spur gear in operation. Using gear vibrations measured from an actual gear accelerated test, this study examined existing gear condition indices to identify those that correlated well with crack size, and established their utility for crack size estimation through index fusion using a neural network. When tested with vibrations measured from another accelerated test, the method had an average estimation error of about 5%.
On Frequency Offset Estimation Using the iNET Preamble in Frequency Selective Fading Channels
2014-03-01
ASM fields; (bottom) the relationship between the indexes of the received samples r(n), the signal samples s(n), the preamble samples p(n) and the short...frequency offset estimators for SOQPSK-TG equipped with the iNET preamble and operating in ISI channels. Four of the five estimators examined here are...sync marker (ASM), and data bits (an LDPC codeword). The availability of a preamble introduces the possibility of data-aided synchronization in
Bush, Stephen J; McCulloch, Mary E B; Summers, Kim M; Hume, David A; Clark, Emily L
2017-06-13
The availability of fast alignment-free algorithms has greatly reduced the computational burden of RNA-seq processing, especially for relatively poorly assembled genomes. Using these approaches, previous RNA-seq datasets could potentially be processed and integrated with newly sequenced libraries. Confounding factors in such integration include sequencing depth and methods of RNA extraction and selection. Different selection methods (typically, either polyA-selection or rRNA-depletion) omit different RNAs, resulting in different fractions of the transcriptome being sequenced. In particular, rRNA-depleted libraries sample a broader fraction of the transcriptome than polyA-selected libraries. This study aimed to develop a systematic means of accounting for library type that allows data from these two methods to be compared. The method was developed by comparing two RNA-seq datasets from ovine macrophages, identical except for RNA selection method. Gene-level expression estimates were obtained using a two-part process centred on the high-speed transcript quantification tool Kallisto. Firstly, a set of reference transcripts was defined that constitute a standardised RNA space, with expression from both datasets quantified against it. Secondly, a simple ratio-based correction was applied to the rRNA-depleted estimates. The outcome is an almost perfect correlation between gene expression estimates, independent of library type and across the full range of levels of expression. A combination of reference transcriptome filtering and a ratio-based correction can create equivalent expression profiles from both polyA-selected and rRNA-depleted libraries. This approach will allow meta-analysis and integration of existing RNA-seq data into transcriptional atlas projects.
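The ratio-based correction described above can be sketched as rescaling the rRNA-depleted estimates so that both library types carry the same total signal over the shared reference transcript set. The data below are simulated, and the single global scale factor is an illustrative simplification of the paper's correction.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical TPM-like estimates for the same genes from a polyA-selected
# and an rRNA-depleted library, quantified against a shared reference set.
n_genes = 1000
true_expr = rng.lognormal(mean=2.0, sigma=1.5, size=n_genes)
polya = true_expr * rng.lognormal(0.0, 0.05, size=n_genes)
# The rRNA-depleted library spreads reads over a broader transcriptome, so
# per-gene estimates over the shared reference are systematically scaled down.
depleted = 0.6 * true_expr * rng.lognormal(0.0, 0.05, size=n_genes)

# Ratio-based correction: rescale the depleted estimates so both libraries
# have the same total signal over the reference set.
factor = polya.sum() / depleted.sum()
depleted_corrected = depleted * factor

r = np.corrcoef(np.log1p(polya), np.log1p(depleted_corrected))[0, 1]
print(f"scale factor = {factor:.2f}, log-scale correlation = {r:.3f}")
```

After correction, the two profiles agree across the full expression range, which is the "almost perfect correlation" property the abstract reports for the real datasets.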
Souza-Junior, Eduardo José; Araújo, Cíntia Tereza Pimenta; Prieto, Lúcia Trazzi; Paulillo, Luís Alexandre Maffei Sartini
2012-11-01
The aim of this study was to evaluate the influence of the LED curing unit and selective enamel etching on dentin microtensile bond strength (μTBS) for self-etch adhesives in class I composite restorations. On 96 human molars, box-shaped class I cavities were made maintaining enamel margins. Self-etch adhesives (Clearfil SE - CSE and Clearfil S(3) - S3) were used to bond a microhybrid composite. Before adhesive application, half of the teeth were enamel acid-etched and the other half were not. Adhesives and composites were cured with the following light curing units (LCUs): one polywave (UltraLume 5 - UL) and two single-peak (FlashLite 1401 - FL and Radii Cal - RD) LEDs. The specimens were then submitted to thermomechanical aging and longitudinally sectioned to obtain bonded sticks (0.9 mm²) to be tested in tension at 0.5 mm/min. The failure mode was then recorded. The μTBS data were submitted to a three-way ANOVA and Tukey's test (α = 0.05). For S3, selective enamel etching provided lower μTBS values (20.7 ± 2.7) compared to the non-etched specimens (26.7 ± 2.2). UL yielded higher μTBS values (24.1 ± 3.2) in comparison to photoactivation with FL (18.8 ± 3.9) and RD (19.9 ± 1.8) for CSE. The two-step CSE was not influenced by the enamel etching (p ≥ 0.05). Enamel acid etching in class I composite restorations affects the dentin μTBS of the one-step self-etch adhesive Clearfil S(3), with no alterations for Clearfil SE bond strength. The polywave LED promoted better bond strength for the two-step adhesive compared to the single-peak ones.
Simultaneous confidence bands for Yule-Walker estimators and order selection
Jirak, Moritz
2012-01-01
Let $\{X_k,k\in{\mathbb{Z}}\}$ be an autoregressive process of order $q$. Various estimators for the order $q$ and the parameters ${\bolds\Theta}_q=(\theta_1,...,\theta_q)^T$ are known; the order is usually determined with Akaike's criterion or related modifications, whereas Yule-Walker, Burg or maximum likelihood estimators are used for the parameters ${\bolds\Theta}_q$. In this paper, we establish simultaneous confidence bands for the Yule-Walker estimators $\hat{\theta}_i$; more precisely, it is shown that the limiting distribution of $\max_{1\leq i\leq d_n}|\hat{\theta}_i-\theta_i|$ is the Gumbel-type distribution $e^{-e^{-z}}$, where $q\in\{0,...,d_n\}$ and $d_n=\mathcal{O}(n^{\delta})$, $\delta>0$. This allows one to modify some of the currently used criteria (AIC, BIC, HQC, SIC), but also yields a new class of consistent estimators for the order $q$. These estimators seem to have some potential, since they outperform most of the previously mentioned criteria in a small simulation study. In particul...
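A minimal sketch of the Yule-Walker estimator discussed above, applied to a simulated AR(1) series; the order, coefficient, and series length are illustrative choices, not values from the paper:

```python
import numpy as np

def yule_walker(x, q):
    """Solve the Yule-Walker equations for AR(q) coefficients
    from the sample autocovariances of x."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Sample autocovariances gamma(0), ..., gamma(q).
    gamma = np.array([x[: n - k] @ x[k:] / n for k in range(q + 1)])
    # Toeplitz system R theta = r built from the autocovariances.
    R = np.array([[gamma[abs(i - j)] for j in range(q)] for i in range(q)])
    return np.linalg.solve(R, gamma[1:])

# Simulate an AR(1) series with theta = 0.5 and recover the coefficient.
rng = np.random.default_rng(0)
x = np.zeros(20000)
for t in range(1, len(x)):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
theta_hat = yule_walker(x, 1)
```

The confidence bands of the paper then bound the maximal deviation of such estimates across coefficients.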
Selecting a spatial resolution for estimation of per-field green leaf area index
Curran, Paul J.; Williamson, H. Dawn
1988-01-01
For any application of multispectral scanner (MSS) data, a user is faced with a number of choices concerning the characteristics of the data; one of these is their spatial resolution. A pilot study was undertaken to determine the spatial resolution that would be optimal for the per-field estimation of green leaf area index (GLAI) in grassland. By reference to empirically-derived data from three areas of grassland, the suitable spatial resolution was hypothesized to lie in the lower portion of a 2-18 m range. To estimate per-field GLAI, airborne MSS data were collected at spatial resolutions of 2 m, 5 m and 10 m. The highest accuracies of per-field GLAI estimation were achieved using MSS data with spatial resolutions of 2 m and 5 m.
Selection of relevant items for decommissioning costing estimation of a PWR using fuzzy logic
Monteiro, Deiglys Borges; Busse, Alexander Lucas; Moreira, Joao M.L.; Maiorino, Jose Rubens, E-mail: deiglys.monteiro@ufabc.edu.br, E-mail: alexlucasb@gmail.com, E-mail: joao.moreira@ufabc.edu.br, E-mail: joserubens.maiorino@ufabc.edu.br [Universidade Federal do ABC (CECS/UFABC), Santo Andre, SP (Brazil). Centro de Engenharia, Modelagem e Ciencias Aplicadas. Programa de Pos-Graduacao em Energia e Engenharia da Energia
2015-07-01
Decommissioning is an important part of a nuclear power plant's life cycle and may occur for technical, economic or safety reasons. Decommissioning requires carrying out a large number of tasks that should be planned in advance, and involves cost evaluations, preparation of activity plans and actual operational actions. Despite the large number of tasks, only part of them are relevant for cost estimation purposes. The technical literature and international regulatory agencies suggest a variety of methods for decommissioning cost estimation. Most of them require very detailed knowledge of the plant and data that are available for plants starting their decommissioning, but not for those in the planning stage. The present work aims to apply fuzzy logic to sort out the items relevant to cost estimation in order to reduce the work effort involved. The scheme uses parametric equations for specific cost items, and is applied to specific parts of the process of nuclear power plant decommissioning. (author)
Eash, David A.; Barnes, Kimberlee K.
2012-01-01
A statewide study was conducted to develop regression equations for estimating six selected low-flow frequency statistics and harmonic mean flows for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include: the annual 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years, the annual 30-day mean low flow for a recurrence interval of 5 years, and the seasonal (October 1 through December 31) 1- and 7-day mean low flows for a recurrence interval of 10 years. Estimation equations also were developed for the harmonic-mean-flow statistic. Estimates of these seven selected statistics are provided for 208 U.S. Geological Survey continuous-record streamgages using data through September 30, 2006. The study area comprises streamgages located within Iowa and 50 miles beyond the State's borders. Because trend analyses indicated statistically significant positive trends when considering the entire period of record for the majority of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. The median number of years of record used to compute each of these seven selected statistics was 35. Geographic information system software was used to measure 54 selected basin characteristics for each streamgage. Following the removal of two streamgages from the initial data set, data collected for 206 streamgages were compiled to investigate three approaches for regionalization of the seven selected statistics. Regionalization, a process using statistical regression analysis, provides a relation for efficiently transferring information from a group of streamgages in a region to ungaged sites in the region. The three regionalization approaches tested included statewide, regional, and region-of-influence regressions. For the regional regression, the study area was divided into three low-flow regions on the basis of hydrologic
Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro
2011-01-01
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exp...
Leslie, William D; Orwoll, Eric S; Nielson, Carrie M; Morin, Suzanne N; Majumdar, Sumit R; Johansson, Helena; Odén, Anders; McCloskey, Eugene V; Kanis, John A
2014-11-01
Although increasing body weight has been regarded as protective against osteoporosis and fractures, there is accumulating evidence that fat mass adversely affects skeletal health compared with lean mass. We examined skeletal health as a function of estimated total body lean and fat mass in 40,050 women and 3600 men age ≥50 years at the time of baseline dual-energy X-ray absorptiometry (DXA) testing from a clinical registry from Manitoba, Canada. Femoral neck bone mineral density (BMD), strength index (SI), cross-sectional area (CSA), and cross-sectional moment of inertia (CSMI) were derived from DXA. Multivariable models showed that increasing lean mass was associated with near-linear increases in femoral BMD, CSA, and CSMI in both women and men, whereas increasing fat mass showed a small initial increase in these measurements followed by a plateau. In contrast, femoral SI was relatively unaffected by increasing lean mass but was associated with a continuous linear decline with increasing fat mass, which should predict higher fracture risk. During mean 5-year follow-up, incident major osteoporosis fractures and hip fractures were observed in 2505 women and 180 men (626 and 45 hip fractures, respectively). After adjustment for fracture risk assessment tool (FRAX) scores (with or without BMD), we found no evidence that lean mass, fat mass, or femoral SI affected prediction of major osteoporosis fractures or hip fractures. Findings were similar in men and women, without significant interactions with sex or obesity. In conclusion, skeletal adaptation to increasing lean mass was positively associated with BMD but had no effect on femoral SI, whereas increasing fat mass had no effect on BMD but adversely affected femoral SI. Greater fat mass was not independently associated with a greater risk of fractures over 5-year follow-up. FRAX robustly predicts fractures and was not affected by variations in body composition.
Matheus Costa dos Reis
2014-01-01
This study was carried out to obtain estimates of the genetic variance and covariance components related to intra- and interpopulation effects in the original populations (C0) and in the third cycle (C3) of reciprocal recurrent selection (RRS), which allows breeders to define the best breeding strategy. For that purpose, the half-sib progenies of intrapopulation (P11 and P22) and interpopulation (P12 and P21) crosses from populations 1 and 2, derived from single-cross hybrids in cycles 0 and 3 of the reciprocal recurrent selection program, were used. The intra- and interpopulation progenies were evaluated in a 10×10 triple lattice design in two separate locations. Data for unhusked ear weight (ear weight without husk) and plant height were collected. All genetic variance and covariance components were estimated from the expected mean squares. The breakdown of additive variance into intrapopulation and interpopulation additive deviations (στ²), and the covariance between these and their intrapopulation additive effects (CovAτ), showed a predominance of the dominance effect for unhusked ear weight. For plant height, these components show that the intrapopulation additive effect explains most of the variation. Estimates of the intrapopulation and interpopulation additive genetic variances confirm that populations derived from single-cross hybrids have potential for recurrent selection programs.
Hongjun Xu
2011-07-01
A channel and delay estimation algorithm for both positive and negative delay, based on the distributed Alamouti scheme, has recently been discussed for base-station-based asynchronous cooperative systems in frequency-flat fading channels. This paper extends the algorithm, the maximum likelihood estimator, to work in frequency-selective fading channels. The minimum mean square error (MMSE) performance of channel estimation for both packet schemes and normal schemes is discussed in this paper. The symbol error rate (SER) performance of equalisation and detection for both time-reversal space-time block code (STBC) and single-carrier STBC is also discussed. The MMSE simulation results demonstrated the superior performance of the packet scheme over the normal scheme, with an improvement in performance of up to 6 dB when feedback was used in the frequency-selective channel at an MSE of 3 × 10^{-2}. The SER simulation results showed that, although both the normal and packet schemes achieved similar diversity orders, the packet scheme demonstrated a 1 dB coding gain over the normal scheme at a SER of 10^{-5}. Finally, the SER simulations showed that the frequency-selective fading system outperformed the frequency-flat fading system.
Hartley, J.A.; Forrow, S.M.; Souhami, R.L. (Univ. College and Middlesex School of Medicine, London (England))
1990-03-27
Large variations in alkylation intensities exist among guanines in a DNA sequence following treatment with chemotherapeutic alkylating agents such as nitrogen mustards, and the substituent attached to the reactive group can impose a distinct sequence preference for reaction. In order to further understand the structural and electrostatic factors which determine the sequence selectivity of alkylation reactions, the effects of increased ionic strength, the intercalator ethidium bromide, the AT-specific minor groove binders distamycin A and netropsin, and the polyamine spermine on guanine N7-alkylation by L-phenylalanine mustard (L-Pam), uracil mustard (UM), and quinacrine mustard (QM) were investigated with a modification of the guanine-specific chemical cleavage technique for DNA sequencing. The results differed with both the nitrogen mustard and the cationic agent used. The effect, which resulted in both enhancement and suppression of alkylation sites, was most striking in the case of netropsin and distamycin A, which differed from each other. DNA footprinting indicated that selective binding to AT sequences in the minor groove of DNA can have long-range effects on the alkylation pattern of DNA in the major groove.
United Nations Industrial Development Organization, Vienna (Austria).
The need to develop managerial and technical personnel in the cement, fertilizer, pulp and paper, sugar, leather and shoe, glass, and metal processing industries of various nations was studied, with emphasis on necessary steps in developing nations to relate occupational requirements to technology, processes, and scale of output. Estimates were…
Student Sorting and Bias in Value-Added Estimation: Selection on Observables and Unobservables
Rothstein, Jesse
2009-01-01
Nonrandom assignment of students to teachers can bias value-added estimates of teachers' causal effects. Rothstein (2008, 2010) shows that typical value-added models indicate large counterfactual effects of fifth-grade teachers on students' fourth-grade learning, indicating that classroom assignments are far from random. This article quantifies…
Potential-scour assessments and estimates of maximum scour at selected bridges in Iowa
Fischer, E.E.
1995-01-01
The results of potential-scour assessments at 130 bridges and estimates of maximum scour at 10 bridges in Iowa are presented. All of the bridges evaluated in the study are constructed bridges (not culverts) that are sites of active or discontinued streamflow-gaging stations and peak-stage measurement sites. The period of the study was from October 1991 to September 1994.
Estimation of selenium intake in Switzerland in relation to selected food groups.
Jenny-Burri, J; Haldimann, M; Dudler, V
2010-11-01
The selenium concentration in foods was analysed in order to identify the principal sources of this trace element in Switzerland. Selenium intake estimations based on three different approaches were carried out. From the relationship between intake and serum/plasma concentration, the selenium intake was estimated at 66 µg day(-1). The second approach was based on measured food groups combined with consumption statistics, and the third approach consisted of duplicate meal samples. With the last two methods, over 75% of the serum/plasma-based intake was confirmed. Swiss pasta made of North American durum wheat was the food with the highest contribution to the dietary intake, followed by meat. The strong decrease in imports of selenium-rich North American wheat in recent years was not reflected in the present intake estimations. It appears that this intake loss was compensated by an increased consumption of other foods. Compared with former intake estimations, selenium intake in Switzerland seems to have been nearly constant for the last 25 years.
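The food-group-based approach amounts to summing concentration times consumption over food groups. A toy sketch with entirely hypothetical concentrations and consumption figures (not the Swiss survey data):

```python
# Hypothetical selenium concentrations (µg per 100 g) and daily consumption (g/day).
concentration = {"pasta": 40.0, "meat": 10.0, "bread": 4.0}
consumption_g = {"pasta": 60.0, "meat": 120.0, "bread": 90.0}

# Daily intake in µg/day: sum over food groups of concentration × amount eaten.
daily_intake = sum(concentration[f] * consumption_g[f] / 100.0 for f in concentration)
```

Comparing such a sum against the serum/plasma-derived figure is how the abstract's cross-validation of methods works.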
Use of expert judgment elicitation to estimate seismic vulnerability of selected building types
Jaiswal, K.S.; Aspinall, W.; Perkins, D.; Wald, D.; Porter, K.A.
2012-01-01
Pooling engineering input on earthquake building vulnerability through an expert judgment elicitation process requires careful deliberation. This article provides an overview of expert judgment procedures including the Delphi approach and the Cooke performance-based method to estimate the seismic vulnerability of a building category.
Threshold Selection for Ultra-Wideband TOA Estimation based on Neural Networks
Xue-rong Cui
2012-09-01
Because of their good penetration into many common materials and inherent fine resolution, Ultra-Wideband (UWB) signals are widely used in remote ranging and positioning applications. On the other hand, because of the high sampling rate required, coherent Time of Arrival (TOA) estimation algorithms are not practical for low-cost, low-complexity UWB systems. In order to improve the precision of TOA estimation, an Energy Detection (ED) based non-coherent TOA estimation algorithm using Artificial Neural Networks (ANN) is presented, based on the skewness of the samples after energy detection. The expected values of skewness and kurtosis with respect to the Signal to Noise Ratio (SNR) are investigated. It is shown that the skewness is more suitable for TOA estimation. The best threshold values for different SNRs are investigated, and the effects of integration period and channel modes are examined. Comparisons with other ED-based algorithms show that in CM1 and CM2 channels, the proposed algorithm provides higher precision and robustness in both high and low SNR environments.
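A minimal sketch of threshold-based energy detection for TOA estimation; the block length, threshold, and toy signal are assumptions, and the ANN's role in the paper (learning the best threshold from the skewness statistic) is not reproduced here:

```python
import numpy as np

def first_block_over_threshold(samples, block_len, norm_threshold):
    """Non-coherent ED TOA sketch: return the index of the first integration
    block whose energy exceeds a normalised threshold."""
    n_blocks = len(samples) // block_len
    e = np.array([np.sum(samples[i * block_len:(i + 1) * block_len] ** 2)
                  for i in range(n_blocks)])
    # Normalise block energies to [0, 1] before thresholding.
    e_norm = (e - e.min()) / (e.max() - e.min())
    return int(np.argmax(e_norm >= norm_threshold))

# Toy received signal: silence, then a pulse arriving in block 3.
sig = np.zeros(100)
sig[60:80] = 1.0
arrival_block = first_block_over_threshold(sig, block_len=20, norm_threshold=0.4)
```

The TOA estimate is then the start time of the detected block, which is why the threshold choice drives the ranging precision.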
Martínez-Baz, Iván; Guevara, Marcela; Elía, Fernando; Ezpeleta, Carmen; Fernández Alonso, Mirian; Castilla, Jesús
2014-01-01
The aim was to estimate the effectiveness of the influenza vaccine under different criteria for selecting patients for swabbing. A case-control study was performed of laboratory-confirmed cases (n=909) and influenza-negative controls (n=732) in the 2010-2011 to 2012-2013 seasons in Navarre (Spain). The adjusted vaccine effectiveness was estimated by including all swabs from patients with influenza-like illness and by selecting only the first two cases per physician and week. The first two patients per physician and week were less frequently vaccinated against influenza (7.9% vs. 12.5%, p=0.021) and less often received confirmation of influenza (53.6% vs. 66.4%, p<0.001) than subsequent patients. These differences decreased after adjustment for covariates. The effectiveness of the influenza vaccine was 49% (95% CI: 23-66%) when all swabs were included and 55% (95% CI: 27-72%) when we selected the first two swabs per week and physician. The selection of the first two patients per physician and week may bias assessment of the effectiveness of the influenza vaccine, although this bias was small in the seasons analyzed. Copyright © 2013 SESPAS. Published by Elsevier Espana. All rights reserved.
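In a test-negative case-control design like this one, crude vaccine effectiveness is (1 - odds ratio of vaccination) x 100; the study's estimates were covariate-adjusted, so this 2x2 sketch with hypothetical counts only illustrates the unadjusted calculation:

```python
def vaccine_effectiveness(vacc_cases, unvacc_cases, vacc_controls, unvacc_controls):
    """Crude VE from a 2x2 table: 100 * (1 - OR), where OR is the odds ratio
    of vaccination among cases versus test-negative controls."""
    odds_ratio = (vacc_cases / unvacc_cases) / (vacc_controls / unvacc_controls)
    return (1.0 - odds_ratio) * 100.0

# Hypothetical counts, not the study's data.
ve = vaccine_effectiveness(50, 859, 91, 641)
```

Restricting the table to the first two swabs per physician and week, as the study does, changes these counts and hence the estimate, which is exactly the selection effect being tested.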
Selectivity experiments to estimate the effect of escape windows in the Skagerak roundfish fishery
Madsen, Niels; Stæhr, Karl-Johan
2005-01-01
The objectives were to measure roundfish selectivity and to test whether square-mesh windows inserted in the codend could improve the selectivity. Sea trials were conducted with a commercial trawler in the Skagerak area. Three codend types were tested: (1) a standard codend with 104 mm meshes; (2) a standard 104 mm codend with two 85 mm square-mesh side windows; (3) a standard 104 mm codend with an 85-mm square-mesh top window. The twin-trawl method was used, where one side of the rig had a 35-mm (nominal mesh size) control codend. Hauls of each codend were fitted simultaneously in a fixed- and random...
Al-Murad, Tamim M.
2010-06-01
In densely deployed sensor networks, correlation among measurements may be high. Spatial sampling through node selection is usually used to minimize this correlation and to save energy consumption. However because of the fading nature of the wireless channels, extra care should be taken when performing this sampling. In this paper, we develop expressions for the distortion which include the channel effects. The asymptotic behavior of the distortion as the number of sensors or total transmit power increase without bound is also investigated. Further, based on the channel and position information we propose and test several node selection schemes.
Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam
2012-01-01
Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. The logistic regression model achieved more accurate presence/absence classification
Estimation of magnitudes of debris flows in selected torrential watersheds in Slovenia
Sodnik, Jošt; Mikoš, Matjaž
2006-01-01
In this paper the application of different methods for estimation of magnitudes of rainfall-induced debris flows in 18 torrents in the Upper Sava River valley, NW Slovenia, and in 2 torrents in Pohorje, N Slovenia is described. Additional verification of the methods was performed in the torrential watersheds with active debris flows in the recent past (Predelica and Brusnik in the Soca River basin, W Slovenia). For some of the methods, the knowledge of morphometric characteristics of a torren...
Schoeni, Robert F.; Wiemers, Emily E.
2015-01-01
Numerous studies have estimated a high intergenerational correlation in economic status. Such studies do not typically attend to potential biases that may arise due to survey attrition. Using the Panel Study of Income Dynamics – the data source most commonly used in prior studies – we demonstrate that attrition is particularly high for low-income adult children with low-income parents and particularly low for high-income adult children with high-income parents. Because of this pattern of attr...
Khitrov, V A
2001-01-01
A new, model-independent method to simultaneously estimate the level densities excited in the (n,gamma) reaction and the radiative strength functions of dipole transitions is developed. The method can be applied to any nucleus and reaction followed by cascade gamma-emission. It is only necessary to measure the intensities of two-step gamma-cascades depopulating one or several high-excited states and to determine the quanta ordering in the main portion of the observed cascades. The method provides a sufficiently narrow interval of the most probable densities of levels with given J^π and of the radiative strength functions of the dipole transitions populating them.
Estimation of engineering properties of selected tuffs by using grain/matrix ratio
Korkanç, Mustafa; Solak, Burak
2016-08-01
Petrographic properties of rocks substantially affect their physical and mechanical properties. In the present study, for the purpose of examining the relationship between the petrographic and geomechanical properties of pyroclastic rocks, fresh samples were taken from tuffs of different textural properties that are widely distributed in the Cappadocia region. Experimental studies were conducted on 20 fresh samples to determine their engineering properties through petrographic examinations. Dry and saturated unit weights, water absorption by weight, effective porosity, capillary water absorption, slake durability index, P-wave velocity, point load index, uniaxial compressive strength and nail penetration index of the samples were determined. Higher geomechanical values were obtained from the samples of Kavak tuffs affected by hydromechanical alteration and from highly welded tuffs. On thin sections prepared from the fresh samples, petrographic studies were carried out using a point counter with a polarizing microscope, and the mineral composition, texture, void ratio, the presence of volcanic glass and the state of these fragments within the rock, secondary mineral formation and opaque mineral presence were determined. The grain/matrix ratio (GMR) was calculated using the ratios of phenocrysts, microlites, volcanic glass, voids and opaque minerals after point counting on thin sections. A potential relationship between the petrographic and geomechanical properties of the fresh samples was investigated by correlation analysis. Such a relationship can be highly instructive for engineering applications. For this purpose, we used the poorly-welded Kavak and densely-welded Kızılkaya tuff samples in our study.
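A sketch of computing a grain/matrix ratio from point-count percentages and correlating it with a geomechanical property; the grouping of components into "grain" and "matrix" and all numbers below are assumptions for illustration, not the paper's definition or data:

```python
import numpy as np

def grain_matrix_ratio(phenocryst, opaque, microlite, glass, void):
    """Point-count percentages. Grouping phenocrysts + opaques as 'grain'
    and microlites + glass + voids as 'matrix' is an assumed convention."""
    return (phenocryst + opaque) / (microlite + glass + void)

# Hypothetical per-sample GMR values and uniaxial compressive strengths (MPa).
gmr = np.array([0.15, 0.22, 0.30, 0.41, 0.55])
ucs = np.array([8.0, 10.5, 14.0, 18.5, 24.0])
r = float(np.corrcoef(gmr, ucs)[0, 1])   # Pearson correlation coefficient
```

A strong correlation of this kind is what would make GMR a useful proxy for geomechanical testing.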
Koltun, G.F.; Kula, Stephanie P.
2013-01-01
This report presents the results of a study to develop methods for estimating selected low-flow statistics and for determining annual flow-duration statistics for Ohio streams. Regression techniques were used to develop equations for estimating 10-year recurrence-interval (10-percent annual-nonexceedance probability) low-flow yields, in cubic feet per second per square mile, with averaging periods of 1, 7, 30, and 90-day(s), and for estimating the yield corresponding to the long-term 80-percent duration flow. These equations, which estimate low-flow yields as a function of a streamflow-variability index, are based on previously published low-flow statistics for 79 long-term continuous-record streamgages with at least 10 years of data collected through water year 1997. When applied to the calibration dataset, average absolute percent errors for the regression equations ranged from 15.8 to 42.0 percent. The regression results have been incorporated into the U.S. Geological Survey (USGS) StreamStats application for Ohio (http://water.usgs.gov/osw/streamstats/ohio.html) in the form of a yield grid to facilitate estimation of the corresponding streamflow statistics in cubic feet per second. Logistic-regression equations also were developed and incorporated into the USGS StreamStats application for Ohio for selected low-flow statistics to help identify occurrences of zero-valued statistics. Quantiles of daily and 7-day mean streamflows were determined for annual and annual-seasonal (September–November) periods for each complete climatic year of streamflow-gaging station record for 110 selected streamflow-gaging stations with 20 or more years of record. The quantiles determined for each climatic year were the 99-, 98-, 95-, 90-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 25-, 20-, 10-, 5-, 2-, and 1-percent exceedance streamflows. Selected exceedance percentiles of the annual-exceedance percentiles were subsequently computed and tabulated to help facilitate consideration of the
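Flow-duration statistics such as the long-term 80-percent duration flow can be computed as percentiles of the daily-flow record. A sketch with a toy flow series, not the Ohio streamgage data:

```python
import numpy as np

def exceedance_flow(daily_flows, exceed_pct):
    """Flow-duration statistic: the flow equalled or exceeded
    exceed_pct percent of the time (the (100 - exceed_pct)th percentile)."""
    return float(np.percentile(daily_flows, 100 - exceed_pct))

# Toy record of daily mean flows (cubic feet per second), purely illustrative.
flows = np.arange(1.0, 101.0)
q80 = exceedance_flow(flows, 80)   # the 80-percent duration flow
```

The report's tabulated 99- through 1-percent exceedance streamflows are the same computation applied per climatic year.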
Wang, Quanchao; Yu, Yang; Li, Fuhua; Zhang, Xiaojun; Xiang, Jianhai
2017-09-01
Genomic selection (GS) can be used to accelerate genetic improvement by shortening the selection interval. The successful application of GS depends largely on the accuracy of the prediction of genomic estimated breeding value (GEBV). This study is a first attempt to understand the practicality of GS in Litopenaeus vannamei and aims to evaluate models for GS on growth traits. The performance of GS models in L. vannamei was evaluated in a population consisting of 205 individuals, which were genotyped for 6 359 single nucleotide polymorphism (SNP) markers by specific length amplified fragment sequencing (SLAF-seq) and phenotyped for body length and body weight. Three GS models (RR-BLUP, BayesA, and Bayesian LASSO) were used to obtain the GEBV, and their predictive ability was assessed by the reliability of the GEBV and the bias of the predicted phenotypes. The mean reliability of the GEBVs for body length and body weight predicted by the different models was 0.296 and 0.411, respectively. For each trait, the performances of the three models were very similar to each other with respect to predictability. The regression coefficients estimated by the three models were close to one, suggesting near to zero bias for the predictions. Therefore, when GS was applied in a L. vannamei population for the studied scenarios, all three models appeared practicable. Further analyses suggested that improved estimation of the genomic prediction could be realized by increasing the size of the training population as well as the density of SNPs.
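RR-BLUP can be sketched as ridge regression of phenotypes on centred marker codes; the simulated population below is illustrative only (the study used 205 shrimp and 6 359 SNPs, and the shrinkage parameter would normally come from estimated variance components):

```python
import numpy as np

def rrblup_gebv(genotypes, phenotypes, lam):
    """Ridge-regression (RR-BLUP-style) marker effects; GEBV = X @ effects.
    lam stands in for the variance-ratio shrinkage parameter."""
    X = genotypes - genotypes.mean(axis=0)   # centre 0/1/2 marker codes
    y = phenotypes - phenotypes.mean()
    m = X.shape[1]
    effects = np.linalg.solve(X.T @ X + lam * np.eye(m), X.T @ y)
    return X @ effects

# Simulated toy population: 50 animals, 20 SNPs coded 0/1/2.
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(50, 20)).astype(float)
true_effects = rng.standard_normal(20)
y = X @ true_effects + 0.5 * rng.standard_normal(50)
gebv = rrblup_gebv(X, y, lam=1.0)
```

Reliability and bias, as assessed in the study, correspond to the correlation and regression slope between GEBVs and phenotypes in a held-out validation set.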
Silva, Felipe O; Hemerly, Elder M; Leite Filho, Waldemar C
2017-02-23
This paper presents the second part of a study aiming at the error state selection in Kalman filters applied to the stationary self-alignment and calibration (SSAC) problem of strapdown inertial navigation systems (SINS). The observability properties of the system are systematically investigated, and the number of unobservable modes is established. Through the analytical manipulation of the full SINS error model, the unobservable modes of the system are determined, and the SSAC error states (except the velocity errors) are proven to be individually unobservable. The estimability of the system is determined through the examination of the major diagonal terms of the covariance matrix and their eigenvalues/eigenvectors. Filter order reduction based on observability analysis is shown to be inadequate, and several misconceptions regarding SSAC observability and estimability deficiencies are removed. As the main contributions of this paper, we demonstrate that, except for the position errors, all error states can be minimally estimated in the SSAC problem and, hence, should not be removed from the filter. Corroborating the conclusions of the first part of this study, a 12-state Kalman filter is found to be the optimal error state selection for SSAC purposes. Results from simulated and experimental tests support the outlined conclusions.
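The unobservable-mode count discussed above can be checked numerically from the rank of the observability matrix; the toy linear system below is illustrative only, not the SINS error model:

```python
import numpy as np

def unobservable_modes(F, H):
    """Number of unobservable modes = state dimension minus the rank of
    the observability matrix [H; HF; ...; HF^(n-1)]."""
    n = F.shape[0]
    O = np.vstack([H @ np.linalg.matrix_power(F, k) for k in range(n)])
    return n - np.linalg.matrix_rank(O)

# Toy 3-state system: the third state never influences the measurement.
F = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])
n_unobs = unobservable_modes(F, H)
```

The paper's point is that rank deficiency alone is a poor guide to filter order reduction: states in unobservable subspaces can still be estimable, as seen in the covariance eigenstructure.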
Wang, Quanchao; Yu, Yang; Li, Fuhua; Zhang, Xiaojun; Xiang, Jianhai
2016-10-01
Genomic selection (GS) can be used to accelerate genetic improvement by shortening the selection interval. The successful application of GS depends largely on the accuracy of the prediction of genomic estimated breeding value (GEBV). This study is a first attempt to understand the practicality of GS in Litopenaeus vannamei and aims to evaluate models for GS on growth traits. The performance of GS models in L. vannamei was evaluated in a population consisting of 205 individuals, which were genotyped for 6 359 single nucleotide polymorphism (SNP) markers by specific length amplified fragment sequencing (SLAF-seq) and phenotyped for body length and body weight. Three GS models (RR-BLUP, BayesA, and Bayesian LASSO) were used to obtain the GEBV, and their predictive ability was assessed by the reliability of the GEBV and the bias of the predicted phenotypes. The mean reliability of the GEBVs for body length and body weight predicted by the different models was 0.296 and 0.411, respectively. For each trait, the performances of the three models were very similar to each other with respect to predictability. The regression coefficients estimated by the three models were close to one, suggesting near to zero bias for the predictions. Therefore, when GS was applied in a L. vannamei population for the studied scenarios, all three models appeared practicable. Further analyses suggested that improved estimation of the genomic prediction could be realized by increasing the size of the training population as well as the density of SNPs.
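The marker-effect model underlying RR-BLUP can be sketched as ridge regression: solve (Z'Z + λI)u = Z'y for marker effects u, then compute GEBV = Zu for new individuals. Predictive ability is taken as the correlation between GEBV and phenotype, and bias is assessed from the regression slope of phenotype on GEBV (a slope near one indicates little bias, as in the abstract). All data below are simulated, and λ and the dimensions are illustrative, not the paper's settings:

```python
import random

random.seed(1)

def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(M[i][c]))
        M[c], M[p] = M[p], M[c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            M[i] = [x - f * y for x, y in zip(M[i], M[c])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rr_blup(Z, y, lam):
    """Ridge solution of the marker-effect model: (Z'Z + lam*I) u = Z'y."""
    m = len(Z[0])
    ZtZ = [[sum(row[i] * row[j] for row in Z) + (lam if i == j else 0.0)
            for j in range(m)] for i in range(m)]
    Zty = [sum(row[i] * yk for row, yk in zip(Z, y)) for i in range(m)]
    return solve(ZtZ, Zty)

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (sum((x - ma) ** 2 for x in a) ** 0.5
                  * sum((y - mb) ** 2 for y in b) ** 0.5)

def slope(y, x):
    """Regression slope of y on x; near 1.0 means little prediction bias."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Simulate 0/1/2 genotypes and phenotypes driven by 10 additive marker effects.
m = 10
true_u = [random.gauss(0, 1) for _ in range(m)]

def simulate(n):
    Z = [[float(random.choice((0, 1, 2))) for _ in range(m)] for _ in range(n)]
    y = [sum(z * u for z, u in zip(row, true_u)) + random.gauss(0, 1) for row in Z]
    return Z, y

Z_tr, y_tr = simulate(150)   # training population
Z_te, y_te = simulate(50)    # validation population

u_hat = rr_blup(Z_tr, y_tr, lam=1.0)
gebv = [sum(z * u for z, u in zip(row, u_hat)) for row in Z_te]
print("predictive ability:", round(corr(gebv, y_te), 2))
print("bias (slope):", round(slope(y_te, gebv), 2))
```

Enlarging the training set or the marker panel in this sketch raises the correlation, mirroring the abstract's point that bigger training populations and denser SNPs improve genomic prediction.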
Method of estimation of selected hens welfare elements in an alternative maintenance system
Zbigniew Domagalski
2012-09-01
This study attempts to estimate the welfare of broody hens in an alternative maintenance system. Animal welfare estimation methods take behavioural and health aspects, as well as numerous physiological indicators, into consideration. The evaluation described here complies with the EU recommendations for the spot method TG-200, which serves to assess the conformity of both buildings and breeding technology with regulatory requirements and with those defined in the course of the hens research. The following functional areas were considered in assessing welfare: freedom of movement, feed intake, herding behaviour, resting conditions, comfort of living, nesting conditions, and herd health and care.
Howe, Chanelle J; Cole, Stephen R; Chmiel, Joan S; Muñoz, Alvaro
2011-03-01
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984-2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed.
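Inverse probability-of-censoring weighting can be illustrated with a deliberately simple, nonparametric sketch: within strata of a common predictor, each uncensored subject is weighted by the inverse of the estimated probability of remaining uncensored, so the weighted uncensored subjects stand in for the whole stratum. Real applications fit (often time-varying) regression models for the censoring mechanism; the stratified version below is only a minimal illustration with toy data:

```python
from collections import defaultdict

def ipc_weights(strata, uncensored):
    """Stratified inverse probability-of-censoring weights.

    strata: covariate value per subject (a common predictor of censoring
    and outcome); uncensored: 1 if the subject was not artificially censored.
    Censored subjects get weight 0; uncensored ones get 1/P(uncensored|stratum).
    """
    n_total, n_uncens = defaultdict(int), defaultdict(int)
    for s, u in zip(strata, uncensored):
        n_total[s] += 1
        n_uncens[s] += u
    p = {s: n_uncens[s] / n_total[s] for s in n_total}
    return [u / p[s] if p[s] > 0 else 0.0 for s, u in zip(strata, uncensored)]

# Toy cohort: stratum "A" keeps 4 of 5 subjects, stratum "B" keeps 1 of 2.
strata = ["A", "A", "A", "A", "A", "B", "B"]
uncensored = [1, 1, 1, 1, 0, 1, 0]
w = ipc_weights(strata, uncensored)
print(w)  # uncensored A subjects weigh 1.25 each, the B subject weighs 2.0
```

The weighted uncensored subjects reconstruct each stratum's original size (the weights sum to 5 and 2), which is how IPCW removes the induced selection bias, provided the common predictors of censoring and outcome are measured, the very assumption whose violation the article examines.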
Lamb mode selection for accurate wall loss estimation via guided wave tomography
Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P. [Department of Mechanical Engineering, Imperial College, London, SW7 2AZ (United Kingdom)
2014-02-18
Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1–2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher order modes do not exist at all thicknesses; their performance is compared using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when a liquid loading was present was much higher than that of S0. A0 was also less sensitive than S0 to the presence of coatings on the surface.
Andrew J Hearn
The marbled cat Pardofelis marmorata is a poorly known wild cat with a broad distribution across much of the Indomalayan ecorealm. This felid is thought to exist at low population densities throughout its range, yet no estimates of its abundance exist, hampering assessment of its conservation status. To investigate the distribution and abundance of marbled cats we conducted intensive, felid-focused camera trap surveys of eight forest areas and two oil palm plantations in Sabah, Malaysian Borneo. Study sites were broadly representative of the range of habitat types and the gradient of anthropogenic disturbance and fragmentation present in contemporary Sabah. We recorded marbled cats in all forest study areas apart from a small, relatively isolated forest patch, although photographic detection frequency varied greatly between areas. No marbled cats were recorded within the plantations, but a single individual was recorded walking along the forest/plantation boundary. We collected sufficient numbers of marbled cat photographic captures at three study areas to permit density estimation based on spatially explicit capture-recapture analyses. Estimated population densities in the primary lowland Danum Valley Conservation Area and the primary upland Tawau Hills Park were 19.57 (SD: 8.36) and 7.10 (SD: 1.90) individuals per 100 km², respectively, and the selectively logged lowland Tabin Wildlife Reserve yielded an estimated density of 10.45 (SD: 3.38) individuals per 100 km². The low detection frequencies recorded at our other survey sites and in published studies elsewhere in its range, together with the absence of previous density estimates for this felid, suggest that our density estimates may be from the higher end of its abundance spectrum. We provide recommendations for future marbled cat survey approaches.
Sigrist, Mirna; Brusa, Lucila; Campagnoli, Darío; Beldoménico, Horacio
2012-10-15
An optimised FI-HGAAS method was applied to determine total selenium concentrations in selected high-consumption foods (fish, beef, chicken, milk, rice, wheat flour, egg) and to estimate their contribution to the Argentinean dietary intake, on which information is currently scarce. Through several optimisation steps a suitable method was achieved, showing satisfactory figures of merit for all matrices, with an average recovery of 96%. The highest levels were found in fish (94-314), canned tuna (272-282) and eggs (134-217); lower values were found for wheat flour (22-42), rice (<22), pasta (47-64) and milk (<7-9). An estimated intake of 32 and 24 μg day(-1) for adult men and women, respectively, suggested a deficient Se intake, warranting further comprehensive surveys of Se occurrence in Argentina.
Ludewig, H.; Catalan-Lasheras, N.; Simos, N.; Walker, J.; Mallen, A.; Wei, J.; Todosow, M.
2000-06-30
The highest doses to components in the SNS ring are expected to be to those located in the collimation straight section. In this paper the authors present estimated doses to magnets and cable located between collimators. In addition the buildup of relatively long half-life radioactive isotopes is estimated, following machine operation and shutdown. Finally, the potential dose to operators approaching the machine following operation and shutdown for four hours is made. The results indicate that selected components might require replacement after several years of full power operation. In addition, the reflection of gamma-rays from the tunnel walls contribute a non-negligible amount to the dose of an operator in the tunnel following machine shutdown.
Estimation of a Trophic State Index for selected inland lakes in Michigan, 1999–2013
Fuller, Lori M.; Jodoin, Richard S.
2016-03-11
A 15-year estimated Trophic State Index (eTSI) for Michigan inland lakes is available; it spans seven datasets, each representing 1 to 3 years of data from 1999 to 2013. On average, 3,000 inland lake eTSI values are represented in each of the datasets by a process that relates field-measured Secchi-disk transparency (SDT) to Landsat satellite imagery to provide eTSI values for unsampled inland lakes. The correlation between eTSI values and field-measured Trophic State Index (TSI) values from SDT was strong, as shown by R² values from 0.71 to 0.83. Mean eTSI values ranged from 42.7 to 46.8 units, which when converted to estimated SDT (eSDT) ranged from 8.9 to 12.5 feet across the datasets. Most eTSI values for Michigan inland lakes are in the mesotrophic TSI class. The Environmental Protection Agency (EPA) Level III Ecoregions were used to illustrate and compare the spatial distribution of eTSI classes for Michigan inland lakes. Lakes in the Northern Lakes and Forests, North Central Hardwood Forests, and Southern Michigan/Northern Indiana Drift Plains ecoregions are predominantly in the mesotrophic TSI class. The Huron/Erie Lake Plains and Eastern Corn Belt Plains ecoregions had predominantly eutrophic-class lakes, as well as the highest percentages of hypereutrophic lakes of any ecoregions in the State. Data from multiple sampling programs—including data collected by volunteers with the Cooperative Lakes Monitoring Program (CLMP) through the Michigan Department of Environmental Quality (MDEQ), and the 2007 National Lakes Assessment (NLA)—were compiled to compare the distribution of lake TSI classes between programs. The seven eTSI datasets are available for viewing and download with eSDT from the Michigan Lake Water Clarity Interactive Map Viewer at http://mi.water.usgs.gov/projects/RemoteSensing/index.html.
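Field-measured SDT is typically converted to a Trophic State Index with Carlson's (1977) formula, TSI = 60 − 14.41 ln(SDT in meters); whether the USGS report uses exactly this variant is an assumption here. Under that formula, the report's mean eSDT range of 8.9–12.5 ft lands in or near the mesotrophic band (TSI 40–50), consistent with its mean eTSI range of 42.7–46.8:

```python
import math

FT_PER_M = 3.28084

def tsi_from_secchi_ft(sdt_ft):
    """Carlson (1977) Trophic State Index from Secchi-disk transparency in feet."""
    return 60.0 - 14.41 * math.log(sdt_ft / FT_PER_M)

def trophic_class(tsi):
    # Conventional Carlson class boundaries.
    if tsi < 40:
        return "oligotrophic"
    if tsi < 50:
        return "mesotrophic"
    if tsi < 70:
        return "eutrophic"
    return "hypereutrophic"

for sdt in (8.9, 12.5):
    t = tsi_from_secchi_ft(sdt)
    print(round(t, 1), trophic_class(t))
```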
Estimation of indoor and outdoor ratios of selected volatile organic compounds in Canada
Xu, Jing; Szyszkowicz, Mieczyslaw; Jovic, Branka; Cakmak, Sabit; Austin, Claire C.; Zhu, Jiping
2016-09-01
Indoor air and outdoor air concentration (I/O) ratio can be used to identify the origins of volatile organic compounds (VOCs). I/O ratios of 25 VOCs in Canada were estimated based on the data collected in various areas in Canada between September 2009 and December 2011. The indoor VOC data were extracted from the Canadian Health Measures Survey (CHMS). Outdoor VOC data were obtained from Canada's National Air Pollution Surveillance (NAPS) Network. The sampling locations covered nine areas in six provinces in Canada. Indoor air concentrations were found higher than outdoor air for all studied VOCs, except for carbon tetrachloride. Two different approaches were employed to estimate the I/O ratios; both approaches produced similar I/O values. The I/O ratios obtained from this study were similar to two other Canadian studies where indoor air and outdoor air of individual dwellings were measured. However, the I/O ratios found in Canada were higher than those in European cities and in two large USA cities, possibly due to the fact that the outdoor air concentrations recorded in the Canadian studies were lower. Possible source origins identified for the studied VOCs based on their I/O ratios were similar to those reported by others. In general, chlorinated hydrocarbons, short-chain (C5, C6) n-alkanes and benzene had significant outdoor sources, while long-chain (C10sbnd C12) n-alkanes, terpenes, naphthalene and styrene had significant indoor sources. The remaining VOCs had mixed indoor and outdoor sources.
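The abstract does not specify its two estimation approaches; two common choices are the ratio of medians and the median of paired ratios, sketched below with hypothetical concentrations for a single VOC. For well-behaved data the two give similar values, matching the abstract's observation that both approaches produced similar I/O ratios:

```python
import statistics as st

def ratio_of_medians(indoor, outdoor):
    return st.median(indoor) / st.median(outdoor)

def median_of_ratios(indoor, outdoor):
    # Requires paired indoor/outdoor observations (same area and period).
    return st.median([i / o for i, o in zip(indoor, outdoor)])

# Hypothetical paired concentrations (ug/m3) across four sampling areas.
indoor = [2.1, 1.8, 3.0, 2.5]
outdoor = [1.0, 0.9, 1.2, 1.1]
print(round(ratio_of_medians(indoor, outdoor), 2),
      round(median_of_ratios(indoor, outdoor), 2))
```

An I/O ratio well above 1, as here, points to a dominant indoor source; a ratio near 1 (as the abstract reports for carbon tetrachloride) points to outdoor origin.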
Okundamiya, Michael Stephen
2010-12-01
This study proposes a temperature-based model of monthly mean daily global solar radiation on horizontal surfaces for selected cities representing the six geopolitical zones in Nigeria. The modelling was based on linear regression theory and was computed using monthly mean daily data sets of minimum and maximum ambient temperatures. The results of three statistical indicators, Mean Bias Error (MBE), Root Mean Square Error (RMSE) and t-statistic (TS), computed for the model, along with a practical comparison of the estimated and observed data, validate the excellent performance accuracy of the proposed model.
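The three indicators can be computed directly from paired estimated/observed series. The t-statistic below follows the form commonly used for solar-radiation model validation (Stone, 1993), t = sqrt((n−1)·MBE²/(RMSE²−MBE²)); whether the paper uses exactly this form is an assumption, and the monthly data are hypothetical:

```python
import math

def mbe(est, obs):
    """Mean Bias Error: average signed deviation of estimates from observations."""
    return sum(e - o for e, o in zip(est, obs)) / len(obs)

def rmse(est, obs):
    """Root Mean Square Error."""
    return math.sqrt(sum((e - o) ** 2 for e, o in zip(est, obs)) / len(obs))

def t_stat(est, obs):
    """Stone (1993): t = sqrt((n-1) * MBE^2 / (RMSE^2 - MBE^2))."""
    n = len(obs)
    m, r = mbe(est, obs), rmse(est, obs)
    return math.sqrt((n - 1) * m * m / (r * r - m * m))

# Hypothetical monthly mean daily radiation, observed vs estimated (MJ/m^2/day).
obs = [12.1, 14.3, 16.8, 18.2, 19.5, 20.1, 19.8, 18.9, 17.0, 15.2, 13.0, 11.9]
est = [12.4, 14.0, 17.1, 18.0, 19.9, 20.3, 19.5, 19.2, 16.8, 15.5, 12.7, 12.1]
print(round(mbe(est, obs), 3), round(rmse(est, obs), 3), round(t_stat(est, obs), 3))
```

A small |MBE| and RMSE together with a t value below the critical t indicate the model's estimates are statistically indistinguishable from the observations.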
Börnhorst, Claudia; Siani, Alfonso; Tornaritis, Michalis
2017-01-01
Introduction: This study aims to evaluate a potential selection effect caused by exclusion of children with non-identifiable infancy peak (IP) and adiposity rebound (AR) when estimating associations between age and BMI at IP and AR and later weight status. Subjects and methods: In 4 744 children...... with at least 4 repeated measurements of height and weight in the age interval from 0 to 8 years (37 998 measurements) participating in the IDEFICS/I.Family cohort study, fractional polynomial multi-level models were used to derive individual BMI trajectories. Based on these trajectories, age and BMI at IP...... for later weight status instead....
Liepe, Juliane; Kirk, Paul; Filippi, Sarah; Toni, Tina; Barnes, Chris P.; Stumpf, Michael P.H.
2016-01-01
As modeling becomes a more widespread practice in the life- and biomedical sciences, we require reliable tools to calibrate models against ever more complex and detailed data. Here we present an approximate Bayesian computation framework and software environment, ABC-SysBio, which enables parameter estimation and model selection in the Bayesian formalism using Sequential Monte-Carlo approaches. We outline the underlying rationale, discuss the computational and practical issues, and provide detailed guidance as to how the important tasks of parameter inference and model selection can be carried out in practice. Unlike other available packages, ABC-SysBio is highly suited for investigating in particular the challenging problem of fitting stochastic models to data. Although computationally expensive, the additional insights gained in the Bayesian formalism more than make up for this cost, especially in complex problems. PMID:24457334
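ABC-SysBio implements sequential Monte Carlo, but the core approximate Bayesian computation idea is easiest to see in a plain rejection sampler: draw parameters from the prior, simulate data, and keep draws whose simulations land within ε of the observation. The toy problem below (recovering a binomial success probability) is illustrative only and is not taken from the package:

```python
import random, statistics

random.seed(0)

def abc_rejection(data, simulate, prior_sample, distance, eps, n_samples):
    """Minimal ABC rejection sampler (ABC-SysBio itself uses sequential Monte Carlo)."""
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_sample()
        if distance(simulate(theta), data) <= eps:
            accepted.append(theta)
    return accepted

# Toy inference: recover a binomial success probability from one observation.
n_trials = 100
observed = 31  # hypothetical observed number of successes

posterior = abc_rejection(
    data=observed,
    simulate=lambda p: sum(random.random() < p for _ in range(n_trials)),
    prior_sample=lambda: random.random(),      # Uniform(0, 1) prior
    distance=lambda sim, obs: abs(sim - obs),
    eps=2,
    n_samples=300,
)
print(round(statistics.mean(posterior), 2))  # posterior mean near 0.31
```

Sequential Monte Carlo improves on this by shrinking ε over a sequence of intermediate distributions, which is what makes stochastic-model calibration computationally feasible.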
Nicolas Heslot
Genome-wide molecular markers are often used to evaluate genetic diversity in germplasm collections and to make genomic selections in breeding programs. To accurately predict phenotypes and assay genetic diversity, molecular markers should assay a representative sample of the polymorphisms in the population under study. Ascertainment bias arises when marker data are not obtained from a random sample of the polymorphisms in the population of interest. Genotyping-by-sequencing (GBS) is rapidly emerging as a low-cost genotyping platform, even for the large, complex, and polyploid wheat (Triticum aestivum L.) genome. With GBS, marker discovery and genotyping occur simultaneously, resulting in minimal ascertainment bias. The previous platform of choice for whole-genome genotyping in many species such as wheat was DArT (Diversity Array Technology), which has formed the basis of most of our knowledge about cereal genetic diversity. This study compared the GBS and DArT marker platforms for measuring genetic diversity and genomic selection (GS) accuracy in elite U.S. soft winter wheat. From a set of 365 breeding lines, 38,412 single nucleotide polymorphism (SNP) GBS markers were discovered and genotyped. The GBS SNPs gave a higher GS accuracy than 1,544 DArT markers on the same lines, despite 43.9% missing data. Using a bootstrap approach, we observed significantly more clustering of markers and ascertainment bias with DArT relative to GBS. The minor allele frequency distribution of GBS markers had a deficit of rare variants compared to DArT markers. Despite the ascertainment bias of the DArT markers, GS accuracy for three traits out of four was not significantly different when an equal number of markers was used for each platform. This suggests that the gain in accuracy observed using GBS compared to DArT markers was mainly due to a large increase in the number of markers available for the analysis.
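The minor-allele-frequency spectra compared above are built from a per-marker quantity: with genotypes coded 0/1/2 (copies of the alternate allele), the MAF is the frequency of the less common allele. The genotype vectors below are hypothetical:

```python
def minor_allele_frequency(genotypes):
    """genotypes: per-line counts (0/1/2) of the alternate allele at one marker."""
    n_alleles = 2 * len(genotypes)
    alt = sum(genotypes)
    return min(alt, n_alleles - alt) / n_alleles

# Hypothetical markers: one carrying a rare variant, one a common polymorphism.
# Platforms differ in how many low-MAF markers they assay, which is what the
# spectrum comparison in the study detects.
markers = {
    "rare":   [0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
    "common": [0, 1, 2, 1, 0, 2, 1, 1, 2, 0],
}
for name, g in markers.items():
    print(name, minor_allele_frequency(g))
```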
Samuel J Clark
A recent study using Heckman-type selection models to adjust for non-response in the Zambia 2007 Demographic and Health Survey (DHS) found a large correction in HIV prevalence for males. We aim to validate this finding, replicate the adjustment approach in other DHSs, apply the adjustment approach in an external empirical context, and assess the robustness of the technique to different adjustment approaches. We used 6 DHSs and an HIV prevalence study from rural South Africa to validate and replicate the adjustment approach. We also developed an alternative, systematic model of selection processes and applied it to all surveys. We decomposed corrections from both approaches into rate-change and age-structure-change components. We are able to reproduce the adjustment approach for the 2007 Zambia DHS and derive results comparable with the original findings. We are able to replicate applying the approach in several other DHSs. The approach also yields reasonable adjustments for a survey in rural South Africa. The technique is relatively robust to how the adjustment approach is specified. The Heckman selection model is a useful tool for assessing the possibility and extent of selection bias in HIV prevalence estimates from sample surveys.
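A full Heckman two-step fit is beyond a few lines, but the selection problem it addresses is easy to simulate: when a latent factor raises both HIV risk and the chance of refusing the test, the naive prevalence computed from respondents alone is biased downward. All numbers below are hypothetical and chosen only to make the mechanism visible:

```python
import random

random.seed(2)

def simulate_survey(n=200000):
    """Toy cohort in which one latent factor drives both HIV risk and
    non-response (all probabilities are hypothetical)."""
    true_status, observed = [], []
    for _ in range(n):
        latent = random.random()
        hiv = random.random() < 0.10 + 0.20 * latent     # risk rises with latent
        refuse = random.random() < 0.05 + 0.50 * latent  # so does refusal
        true_status.append(hiv)
        if not refuse:
            observed.append(hiv)
    return true_status, observed

true_status, observed = simulate_survey()
true_prev = sum(true_status) / len(true_status)
naive_prev = sum(observed) / len(observed)
print(round(true_prev, 3), round(naive_prev, 3))  # naive estimate is lower
```

A Heckman-type model tries to recover the gap between these two numbers by jointly modelling the refusal equation and the outcome equation, using variables (such as interviewer identity) that affect refusal but not HIV status.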
Heavy metals in selected edible vegetables and estimation of their daily intake in Sanandaj, Iran.
Maleki, Afshin; Zarasvand, Masoud Alasvand
2008-03-01
The levels of four different heavy metals [cadmium (Cd), lead (Pb), chromium (Cr) and copper (Cu)] were determined in various vegetables [leek (Allium ampeloprasum), sweet basil (Ocimum basilicum), parsley (Petroselinum crispum), garden cress (Lepidium sativum) and tarragon (Artemisia dracunculus)] cultivated around Sanandaj City. The contributions of the vegetables to the daily intake of heavy metals from vegetables were investigated. One hundred samples (20 samples per month) were collected for five months. Atomic absorption spectrometry was used to determine the concentrations of these metals in the vegetables. The average concentrations of each heavy metal regardless of the kind of vegetable for Pb, Cu, Cr and Cd were 13.60 +/- 2.27, 11.50 +/- 2.16, 7.90 +/- 1.05 and 0.31 +/- 0.17 mg/kg, respectively. Based on the above concentrations and the information of National Nutrition and Food Research Institute of Iran, the dietary intake of Pb, Cu, Cr and Cd through vegetable consumption was estimated at 2.96, 2.50, 1.72 and 0.07 mg/day, respectively. It is concluded that the vegetables grown in this region are a health hazard for human consumption.
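The intake figures follow from intake = mean concentration × daily vegetable consumption. The consumption value used below (~218 g/day) is back-calculated from the abstract's Pb numbers (2.96 / 13.60 ≈ 0.218 kg/day) and is an assumption, not a figure taken from the paper:

```python
# Back-calculated assumption, not from the paper: ~218 g/day vegetable intake.
CONSUMPTION_KG_PER_DAY = 0.218

mean_conc_mg_per_kg = {"Pb": 13.60, "Cu": 11.50, "Cr": 7.90, "Cd": 0.31}

# Daily intake (mg/day) = concentration (mg/kg) * consumption (kg/day)
intakes = {metal: conc * CONSUMPTION_KG_PER_DAY
           for metal, conc in mean_conc_mg_per_kg.items()}
for metal, intake in intakes.items():
    print(f"{metal}: {intake:.2f} mg/day")
```

This reproduces the abstract's estimates (Pb 2.96, Cu ~2.50, Cr 1.72, Cd 0.07 mg/day) to within rounding.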
Martin, Gary R.; Fowler, Kathleen K.; Arihood, Leslie D.
2016-09-06
Information on low-flow characteristics of streams is essential for the management of water resources. This report provides equations for estimating the 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and the harmonic-mean flow at ungaged, unregulated stream sites in Indiana. These equations were developed using the low-flow statistics and basin characteristics for 108 continuous-record streamgages in Indiana with at least 10 years of daily mean streamflow data through the 2011 climate year (April 1 through March 31). The equations were developed in cooperation with the Indiana Department of Environmental Management.Regression techniques were used to develop the equations for estimating low-flow frequency statistics and the harmonic-mean flows on the basis of drainage-basin characteristics. A geographic information system was used to measure basin characteristics for selected streamgages. A final set of 25 basin characteristics measured at all the streamgages were evaluated to choose the best predictors of the low-flow statistics.Logistic-regression equations applicable statewide are presented for estimating the probability that selected low-flow frequency statistics equal zero. These equations use the explanatory variables total drainage area, average transmissivity of the full thickness of the unconsolidated deposits within 1,000 feet of the stream network, and latitude of the basin outlet. The percentage of the streamgage low-flow statistics correctly classified as zero or nonzero using the logistic-regression equations ranged from 86.1 to 88.9 percent.Generalized-least-squares regression equations applicable statewide for estimating nonzero low-flow frequency statistics use total drainage area, the average hydraulic conductivity of the top 70 feet of unconsolidated deposits, the slope of the basin, and the index of permeability and thickness of the Quaternary surficial sediments as explanatory variables. The average standard error of
Ultra-Flat Galaxies Selected from RFGC Catalog. II. Orbital Estimates of Halo Masses
Karachentsev, I D; Kudrya, Yu N
2016-01-01
We used the Revised Flat Galaxy Catalog (RFGC) to select 817 ultra-flat (UF) edge-on disk galaxies with blue and red apparent axial ratios of $(a/b)_B > 10.0$ and $(a/b)_R > 8.5$. The sample, covering the whole sky except the Milky Way zone, contains 490 UF galaxies with measured radial velocities. Our inspection of the neighboring galaxies around them revealed only 30 companions with a radial velocity difference of $|\Delta V| < 500$ km s$^{-1}$ inside a projected separation of $R_p < 250$ kpc. Moreover, the wider area around each UF galaxy, within $R_p < 750$ kpc, contains no other neighbors brighter than the UF galaxy itself in the same velocity span. The sample galaxies mostly belong to the morphological types Sc, Scd, Sd. They have a moderate rotation velocity curve amplitude of about $120$ km s$^{-1}$ and a moderate K-band luminosity of about $10^{10}L_{\odot}$. The median difference of radial velocities of their companions is $87$ km s$^{-1}$, yielding the median orbital mass estimat...
Comparative estimate of resistance to drought for selected karstic aquifers in Bulgaria
Orehova Tatiana
2004-12-01
Effective management of water resources requires adequate knowledge of the groundwater system, including the influence of climate variability and climate change. The drought of 1982-1994 in Bulgaria led to an important decrease in springflow and a lowering of water levels; groundwater thus demonstrated its vulnerability to drought. The purpose of this paper is to determine the relative resistance of selected aquifers in Bulgaria to a prolonged decrease in recharge to groundwater. A drought resistance indicator was defined for several karstic aquifers based on the method proposed in a report by BRGM. Data from the National Hydrogeological Network, maintained by the National Institute of Meteorology and Hydrology, were processed. For the aim of this study, time series of discharge for karstic springs were used; stations with a significant impact of human activity on groundwater were eliminated. The results show that most of the studied aquifers in Bulgaria have moderate or weak resistance to drought. They are vulnerable to droughts and require good management for effective use of groundwater resources.
Tjahjana, D. D. D. P.; Al-Masuun, I. K.; Gustiantono, A.
2016-03-01
This paper presents the characteristics of wind speed and wind energy potential at Pandansimo Beach, Yogyakarta, based on a Weibull distribution analysis. Ten-minute average time-series wind-speed data for a period of 2 years, measured at a height of 50 m, are used in this study. The continuously recorded wind speed data were averaged over 10 minutes and stored in a data logger. The results showed that the annual mean wind speed at the location is 6.249 m/s, while the annual mean power density is 264 W/m². It was further shown that the mean annual value of the most probable wind speed is 5.5 m/s and the mean annual value of the wind speed carrying maximum energy is 9.608 m/s. The performance of selected commercial wind turbine models designed for electricity generation at the site was examined. The wind turbine with the highest capacity factor is the VESTAS V-110, with 33.97%, which can produce 5951.04 MWh/year.
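The characteristic speeds quoted above follow from the fitted Weibull shape k and scale c: mean = c·Γ(1+1/k), most probable speed = c·((k−1)/k)^(1/k), and maximum-energy speed = c·((k+2)/k)^(1/k). The abstract does not report the fitted k and c; the values below are illustrative guesses chosen to land near the reported 6.249, 5.5 and 9.608 m/s, not the paper's fit:

```python
import math

def weibull_mean(k, c):
    """Mean wind speed of a Weibull(k, c) distribution."""
    return c * math.gamma(1.0 + 1.0 / k)

def most_probable_speed(k, c):
    """Mode of the Weibull speed distribution (requires k > 1)."""
    return c * ((k - 1.0) / k) ** (1.0 / k)

def max_energy_speed(k, c):
    """Speed carrying the maximum energy (mode of the v^3-weighted density)."""
    return c * ((k + 2.0) / k) ** (1.0 / k)

# Illustrative shape/scale values; k and c are assumptions, not fitted site data.
k, c = 2.2, 7.1
print(round(weibull_mean(k, c), 2),
      round(most_probable_speed(k, c), 2),
      round(max_energy_speed(k, c), 2))
```

With these guesses the three formulas give roughly 6.3, 5.4 and 9.5 m/s, close to the site's reported values, which is the usual sanity check on a Weibull fit.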
Antioxidant activity of selected phenols estimated by ABTS and FRAP methods
Izabela Biskup
2013-09-01
Introduction: Phenols are among the most abundant compounds in nature and are strong antioxidants. Excessive levels of free radicals lead to cell and tissue damage, which may cause asthma, Alzheimer's disease, cancers, etc. Taking phenolics with the diet, as supplements or natural medicines, is important for the homeostasis of the organism. Materials and methods: The ten most popular water-soluble phenols were chosen for the experiment to investigate their antioxidant properties, using the ABTS radical scavenging capacity assay and the ferric reducing antioxidant potential (FRAP) assay. Results and discussion: Antioxidant properties of the selected phenols in the ABTS test, expressed as IC50, ranged from 4.332 μM to 852.713 μM (for gallic acid and 4-hydroxyphenylacetic acid, respectively). Antioxidant properties in the FRAP test are expressed as μmol Fe²⁺/ml. All examined phenols reduced ferric ions at a concentration of 1.00 × 10⁻³ mg/ml. Both methods are very useful for determining the antioxidant capacity of water-soluble phenols.
Morrow, Connie E.; Xue, Lihua; Manjunath, Sudha; Culbertson, Jan C.; Accornero, Veronica H.; Anthony, James C.; Bandstra, Emmalee S.
2016-01-01
This study estimated childhood risk of developing selected DSM-IV disorders, including Attention-Deficit Hyperactivity Disorder (ADHD), Oppositional Defiant Disorder (ODD), and Separation Anxiety Disorder (SAD), in children with prenatal cocaine exposure (PCE). Children were enrolled prospectively at birth (n=476), with prenatal drug exposures documented by maternal interview and urine and meconium assays. Study participants included 400 African-American children from the birth cohort, 208 cocaine-exposed (CE) and 192 non-cocaine-exposed (NCE), who attended a 5-year follow-up assessment and whose caregiver completed the Computerized Diagnostic Interview Schedule for Children. Under a generalized linear model (logistic link), Fisher's exact methods were used to estimate the CE-associated relative risk (RR) of these disorders. Results indicated a modest but statistically robust elevation of ADHD risk associated with increasing levels of PCE. Estimated cumulative incidence proportions among CE children were 2.9% for ADHD (vs 3.1% NCE); 1.4% for SAD (vs 1.6% NCE); and 4.3% for ODD (vs 6.8% NCE). Findings offer suggestive evidence of increased risk of ADHD (but not ODD or SAD) in relation to an increasing gradient of PCE during gestation.
Kanerva Mari
2012-10-01
Background: Knowledge of the burden of healthcare-associated infections (HAI) and antibiotic resistance is important for resource allocation in infection control. Although national surveillance networks do not routinely cover all HAIs due to multidrug-resistant bacteria, estimates are nevertheless possible: in the EU, 25,000 patients die from such infections annually. We assessed the burden of HAIs due to multidrug-resistant bacteria in Finland in 2010. Methods: By combining data from the National Infectious Disease Registry on the numbers of bacteremias caused by Staphylococcus aureus, Enterococcus faecium, Escherichia coli, Klebsiella pneumoniae, Enterobacter spp., Pseudomonas aeruginosa and Acinetobacter spp. with susceptibility data from the National Antimicrobial Resistance Network and the Finnish Hospital Infection Program, we assessed the numbers of healthcare-associated bacteremias due to selected multidrug-resistant bacteria. We estimated the numbers of pneumonias, surgical site and urinary tract infections by applying the ratio of these infections in the first national prevalence survey for HAI in 2005. Attributable HAI mortality (3.2%) was also derived from the prevalence survey. Results: The estimated annual number of the most common HAIs due to the selected multidrug-resistant bacteria was 2804 (530 HAIs per million; 6% of all HAIs in Finnish acute care hospitals). The number of attributable deaths was 89 (18 per million). Conclusions: Resources for infection control should be allocated not only to screening and isolation of carriers of multidrug-resistant bacteria, even when they cause a small proportion of all HAIs, but also to preventing all clinical infections.
Boulesteix Anne-Laure
2009-12-01
Background: In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods: In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results: We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions: The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy of presenting only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and we suggest alternative approaches for properly reporting classification accuracy.
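The optimistic bias from reporting only the best of many variants can be reproduced with a stylized simulation: on uninformative data every classifier's true error is 50%, but the minimum of 124 noisy cross-validation estimates sits well below that, on the order of the 31–41% reported in the study. The simulation treats variants as independent, which real (correlated) classifiers are not, so it only illustrates the mechanism:

```python
import random

random.seed(3)

def min_error_over_variants(n_samples=60, n_variants=124, n_sims=200):
    """Average, over simulations, of the minimal empirical error among
    n_variants classifier variants on uninformative data (true error 50%)."""
    best = []
    for _ in range(n_sims):
        # Each variant's empirical error: fraction of n_samples coin flips "wrong".
        errors = [sum(random.random() < 0.5 for _ in range(n_samples)) / n_samples
                  for _ in range(n_variants)]
        best.append(min(errors))
    return sum(best) / len(best)

avg_best = min_error_over_variants()
print(round(avg_best, 2))  # well below the true 50% error rate
```

Even with no signal at all, picking the winner a posteriori yields an apparent error around a third, which is why nested validation or pre-registration of the analysis pipeline is needed for honest error reporting.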
M. Theau.Clément
2015-09-01
Full Text Available Sexual receptivity of rabbit does at insemination greatly influences fertility and is generally induced by hormones or techniques known as “biostimulation”. Searching for more sustainable farming systems, an original alternative would be to use the genetic pathway to increase the does’ receptivity. The purpose of the present study was to identify genetic and non-genetic factors that influence rabbit doe sexual receptivity, in the context of a divergent selection experiment over 1 generation. The experiment spanned 2 generations: the founder generation (G0), consisting of 140 rabbit does, and the G1 generation, comprising 2 divergently selected lines (L and H lines) with 70 does each, with 2 successive batches from each generation. The selection rate of the G0 females to form the G1 lines was 24/140. The selection tests consisted of 16 to 18 successive receptivity tests at a rate of 3 tests per week. On the basis of 4716 tests from 275 females, the average receptivity was 56.6±48.2%. A batch effect and a test operator effect were revealed. The contribution of females to the total variance was 20.0%, whereas that of bucks was only 1.1%. Throughout the experiment, 18.2% of does expressed a low receptivity (<34%), 50.7% a medium one and 33.1% a high one (>66%). Some does were frequently receptive, whereas others were rarely receptive. The repeatability of sexual receptivity was approximately 20%. The results confirmed the high variability of sexual receptivity of non-lactating rabbit does maintained without any biostimulation or hormonal treatment. A lack of selection response on receptivity was observed. Accordingly, the heritability of receptivity was estimated at 0.01±0.02 from an animal model and at 0.02±0.03 from a sire and dam model. The heritability of the average receptivity of a doe was calculated as 0.04. In agreement with these low estimates, the heritability determined was not different from zero.
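The reported repeatability is an intraclass correlation: the fraction of total variance attributable to the doe. A minimal sketch, using the 20.0% doe and 1.1% buck contributions from the abstract; the residual fraction is a stand-in, not a value from the study:

```python
# Repeatability of receptivity as an intraclass correlation. The doe and buck
# variance fractions are from the abstract; the residual is assumed so the
# fractions sum to 1.
var_doe, var_buck, var_residual = 0.20, 0.011, 0.789   # fractions of total variance

repeatability = var_doe / (var_doe + var_buck + var_residual)
print(round(repeatability, 2))  # ~0.20, matching the reported ~20% repeatability
```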
Jonsson, T.; Setzer, M.; Pope, John George;
2013-01-01
Estimation of fish stock size distributions from survey data requires knowledge about gear selectivity. However, selectivity models rest on assumptions that seldom are analyzed. Departures from these can lead to misinterpretations and biased management recommendations. Here, we use survey data...... on great Arctic char (Salvelinus umbla) to analyze how correcting for entanglement of fish and nonisometric growth might improve estimates of selectivity curves, and subsequently estimates of size distribution and age-specific mortality. Initial selectivity curves, using the entire data set, were wide...... and asymmetric, with poor model fits. Removing potentially nonmeshed fish had the greatest positive effect on model fit, resulting in much narrower and less asymmetric selection curves, while attempting to take nonisometric growth into account, by using girth rather than length, improved model fit...
Wood, Molly S.; Fosness, Ryan L.; Skinner, Kenneth D.; Veilleux, Andrea G.
2016-06-27
The U.S. Geological Survey, in cooperation with the Idaho Transportation Department, updated regional regression equations to estimate peak-flow statistics at ungaged sites on Idaho streams using recent streamflow (flow) data and new statistical techniques. Peak-flow statistics with 80-, 67-, 50-, 43-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities (1.25-, 1.50-, 2.00-, 2.33-, 5.00-, 10.0-, 25.0-, 50.0-, 100-, 200-, and 500-year recurrence intervals, respectively) were estimated for 192 streamgages in Idaho and bordering States with at least 10 years of annual peak-flow record through water year 2013. The streamgages were selected from drainage basins with little or no flow diversion or regulation. The peak-flow statistics were estimated by fitting a log-Pearson type III distribution to records of annual peak flows and applying two additional statistical methods: (1) the Expected Moments Algorithm to help describe uncertainty in annual peak flows and to better represent missing and historical record; and (2) the generalized Multiple Grubbs Beck Test to screen out potentially influential low outliers and to better fit the upper end of the peak-flow distribution. Additionally, a new regional skew was estimated for the Pacific Northwest and used to weight at-station skew at most streamgages. The streamgages were grouped into six regions (numbered 1_2, 3, 4, 5, 6_8, and 7, to maintain consistency in region numbering with a previous study), and the estimated peak-flow statistics were related to basin and climatic characteristics to develop regional regression equations using a generalized least squares procedure. Four out of 24 evaluated basin and climatic characteristics were selected for use in the final regional peak-flow regression equations. Overall, the standard error of prediction for the regional peak-flow regression equations ranged from 22 to 132 percent. Among all regions, regression model fit was best for region 4 in west
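The core step of the analysis, fitting a log-Pearson type III distribution to annual peak flows and reading off quantiles, can be sketched as below. The Expected Moments Algorithm and Multiple Grubbs Beck refinements used in the report are not reproduced; the flows are synthetic:

```python
# Minimal log-Pearson type III (LP3) fit: moments of log10(flow), then quantiles
# via scipy's pearson3 distribution. Synthetic annual peaks for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
peaks = rng.lognormal(mean=6.0, sigma=0.5, size=40)   # hypothetical annual peaks

logq = np.log10(peaks)
m, s = logq.mean(), logq.std(ddof=1)
g = stats.skew(logq, bias=False)                      # station skew of log flows

def lp3_quantile(p):
    """Flow with annual exceedance probability p (ppf evaluated at 1 - p)."""
    return 10 ** stats.pearson3.ppf(1 - p, g, loc=m, scale=s)

q2, q100 = lp3_quantile(0.50), lp3_quantile(0.01)     # 2-yr and 100-yr floods
print(q100 > q2)  # True: rarer events have larger magnitudes
```

In the actual study the station skew would additionally be weighted with the new regional skew before computing quantiles.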
Lineweaver, C H
2000-01-01
Planets like the Earth cannot form unless elements heavier than helium are available. These heavy elements, or `metals', were not produced in the big bang. They result from fusion inside stars and have been gradually building up over the lifetime of the Universe. Recent observations indicate that the presence of giant extrasolar planets at small distances from their host stars is strongly correlated with high metallicity of the host stars. The presence of these close-orbiting giants is incompatible with the existence of earth-like planets. Thus, there may be a Goldilocks selection effect: with too little metallicity, earths are unable to form for lack of material; with too much metallicity, giant planets destroy earths. Here I quantify these effects and obtain the probability, as a function of metallicity, for a stellar system to harbour an earth-like planet. I combine this probability with current estimates of the star formation rate and of the gradual build up of metals in the Universe to obtain an estimate...
Iwegbue, Chukwujindu M A
2015-03-01
The concentrations of metals (Cd, Pb, Ni, Cr, Cu, Co, Fe, Mn, and Zn) were determined in selected brands of canned mackerel, sardine, and tuna in Nigeria with a view to providing information on the dietary intakes of metals and lifelong health hazards associated with the consumption of these products. The concentrations of metals were determined by using atomic absorption spectrometry after acid digestion. The mean concentrations of metals in canned mackerel, sardine, and tuna were 0.04-0.58, 0.06-0.44, and 0.32-0.83 μg/g for Cd; 0.05-2.82, 0.70-2.98, and 0.23-2.56 μg/g for Pb; and 1.33-11.33 μg/g for a further metal. Levels of some metals in canned fish were above their permissible limits, while other metals occurred at levels below their permissible limits. The estimated daily intakes of metals from consumption of 20.8 g fish per day by a 60 kg body weight adult were below the provisional tolerable daily intakes for Cd, Pb, Ni, Cr, and Cu and the recommended daily intakes for Co, Fe, Mn, and Zn. The estimated target hazard quotients of the examined metals were less than 1 in the majority of the samples, indicating no long-term health hazard under the present circumstances.
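The intake arithmetic behind such surveys can be made concrete. A minimal sketch of an estimated daily intake (EDI) and target hazard quotient (THQ); the Cd concentration is the upper mackerel value from the abstract, while the reference dose is an illustrative assumption:

```python
# Worked example of the intake arithmetic: EDI from fish consumption, then a
# target hazard quotient. The reference dose (rfd_cd) is an assumed value.
conc_cd = 0.58          # μg/g, upper Cd level reported for canned mackerel
fish_per_day = 20.8     # g/day, consumption figure from the abstract
body_weight = 60.0      # kg, adult body weight from the abstract

edi = conc_cd * fish_per_day / body_weight   # μg/kg body weight per day
rfd_cd = 1.0            # μg/kg bw/day, assumed oral reference dose for Cd
thq = edi / rfd_cd

print(round(edi, 3))    # ~0.201 μg/kg bw/day
print(thq < 1)          # True: consistent with the "no long-term hazard" conclusion
```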
Sigrist, Mirna; Hilbe, Nandi; Brusa, Lucila; Campagnoli, Darío; Beldoménico, Horacio
2016-11-01
An optimized flow injection hydride generation atomic absorption spectroscopy (FI-HGAAS) method was used to determine total arsenic in selected food samples (beef, chicken, fish, milk, cheese, egg, rice, rice-based products, wheat flour, corn flour, oats, breakfast cereals, legumes and potatoes) and to estimate their contributions to inorganic arsenic dietary intake. The limit of detection (LOD) and limit of quantification (LOQ) values obtained were 6 μg kg⁻¹ and 18 μg kg⁻¹, respectively. The mean recovery range obtained for all food at a fortification level of 200 μg kg⁻¹ was 85-110%. Accuracy was evaluated using dogfish liver certified reference material (DOLT-3 NRC) for trace metals. The highest total arsenic concentrations (in μg kg⁻¹) were found in fish (152-439), rice (87-316) and rice-based products (52-201). The contribution to inorganic arsenic (i-As) intake was calculated from the mean i-As content of each food (calculated by applying conversion factors to total arsenic data) and the mean consumption per day. The primary contributors to inorganic arsenic intake were wheat flour, including its proportion in wheat flour-based products (breads, pasta and cookies), followed by rice; these foods account for close to 53% and 17% of the intake, respectively. The i-As dietary intake, estimated as 10.7 μg day⁻¹, was significantly lower than that from drinking water in vast regions of Argentina.
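The conversion-factor approach described here, scaling total-As concentrations by food-specific inorganic fractions and daily consumption, can be sketched as follows. The concentrations echo ranges in the abstract, but the inorganic fractions and consumption amounts are illustrative assumptions, not the study's values:

```python
# Sketch of the inorganic-arsenic (i-As) intake estimate. All factors and
# consumption figures below are assumed for illustration.
foods = {
    #        total As (μg/kg), inorganic fraction, consumption (kg/day)
    "wheat flour": (50,  1.0,  0.110),   # assumed values
    "rice":        (200, 0.7,  0.013),   # mid-range of the 87-316 μg/kg reported
    "fish":        (300, 0.03, 0.010),   # fish arsenic is mostly organic
}

intake = sum(conc * frac * kg for conc, frac, kg in foods.values())  # μg/day
print(round(intake, 1))
```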
Molins-Delgado, Daniel; Gago-Ferrero, Pablo; Díaz-Cruz, M Silvia; Barceló, Damià
2016-02-01
The hazardous potential of organic UV filters (UV-Fs) is becoming an issue of great concern due to the widespread application of these compounds in most daily-use goods, such as hygiene and beauty products. Nanomaterials (NMs) have also been used in personal care products (PCPs) for many years. Nowadays, both classes of chemicals are considered environmental emerging contaminants. Although some in vitro and in vivo studies have reported adverse effects of many UV-Fs on the normal development of organisms, there are scarce data regarding acute and chronic toxicity. The aim of the present study was to determine the EC50 values of selected UV-Fs using standardised toxicity assays on three aquatic species, i.e. Daphnia magna, Raphidocelis subcapitata and Vibrio fischeri. EC50 values obtained were in the mg l⁻¹ range for all the species. The estimated toxicity data allowed us to assess the environmental risk posed by selected UV-Fs in urban groundwater from Barcelona (Spain). The calculated ecological risk indicated a negligible impact on the aquifer. Given the increasing importance of studying mixtures of pollutants, and due to the widespread presence of nanomaterials (NMs) in the aquatic environment, another objective of this work was to explore the response of D. magna after exposure to binary combinations of UV-Fs among themselves and of UV-Fs with NMs. In all cases but the nano-silver mixtures, joint toxicity was mitigated or even eradicated.
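Risk assessments of this kind typically compare a measured concentration against a predicted no-effect concentration derived from the most sensitive EC50 with an assessment factor. A minimal sketch; all numbers, including the EC50s and the measured groundwater concentration, are illustrative assumptions:

```python
# Environmental risk-quotient sketch for a hypothetical UV filter:
#   PNEC = min(EC50) / assessment factor, RQ = MEC / PNEC.
ec50_mg_l = {"Daphnia magna": 1.9, "R. subcapitata": 0.96, "V. fischeri": 3.1}

assessment_factor = 1000                   # conventional for acute-only data
pnec = min(ec50_mg_l.values()) / assessment_factor   # mg/L
mec = 0.00005                              # assumed measured concentration, 50 ng/L

rq = mec / pnec
print(rq < 1)   # True: a quotient below 1 indicates negligible risk
```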
Rattray, Gordon W
2017-12-01
The Camas National Wildlife Refuge (Refuge) in eastern Idaho, established in 1937, contains wetlands, ponds, and wet meadows that are essential resting and feeding habitat for migratory birds and nesting habitat for waterfowl. Initially, natural sources of water supported these habitats. However, during the past few decades, changes in climate and surrounding land use have altered and reduced natural groundwater and surface-water inflows, resulting in a 5-meter decline in the water table and an earlier, and more frequent, occurrence of no flow in Camas Creek at the Refuge. Due to these changes in water availability, water management that includes extensive groundwater pumping is now necessary to maintain the wetlands, ponds, and wet meadows. These water management activities have proven to be inefficient and expensive, and the Refuge is seeking alternative water-management options that are more efficient and less expensive. More efficient water management at the Refuge may be possible through knowledge of the seepage rates from ditches, ponds, and lakes at the Refuge. With this knowledge, water-management efficiency may be improved by natural means through selective use of water bodies with the smallest seepage rates or through engineering efforts to minimize seepage losses from water bodies with the largest seepage rates. The U.S. Geological Survey performed field studies in 2015 and 2016 to estimate seepage rates for selected ditches, ponds, and lakes at the Refuge. Estimated seepage rates from ponds and lakes ranged over an order of magnitude, from 3.4 ± 0.2 to 103.0 ± 0.5 mm/d, with larger seepage rates calculated for Big Pond and Redhead Pond, intermediate seepage rates calculated for Two-way Pond, and smaller seepage rates calculated for the south arm of Sandhole Lake. Estimated seepage losses from two reaches of Main Diversion Ditch were 21 ± 2 and 17 ± 2 percent/km. These losses represent seepage rates of about 890 and 860 mm/d, which are one
Hansen, Stinus; Gudex, Claire; Ahrberg, Fabian;
2014-01-01
by finite element analysis (FEA) at the distal radius and tibia to assess bone characteristics beyond BMD that may contribute to the increased risk of fracture. Thirty-three Caucasian women with SLE (median age 48, range 21-64 years) and 99 controls (median age 45, range 21-64 years) were studied. Groups...... were comparable in radius regarding geometry and vBMD, but SLE patients had lower trabecular number (-7 %, p FEA-estimated failure load compared to controls (-10 %, p ....01], trabecular number (-9 %, p FEA-estimated bone...
Luttrell, K. M.; Tong, X.; Sandwell, D. T.; Brooks, B. A.
2010-12-01
The great February 27, 2010 Mw 8.8 earthquake off the coast of southern Chile ruptured a 606 km length of subduction zone. In this study we make two independent estimates of shear stress in the crust in the region of the Chile earthquake. First, we use a coseismic slip model constrained by geodetic observations from InSAR and GPS to derive a spatially variable estimate of the change in static shear stress along the ruptured fault. Second, we use a static force balance model to constrain the crustal shear stress required to support observed accretionary wedge topography and the stress orientation indicated by the earthquake focal mechanism. This includes the derivation of a semi-analytic solution for the stress field exerted by surface and Moho topography loading the crust. We find that the deviatoric stress exerted by topography is minimized in the limit when the crust is considered an incompressible elastic solid, with a Poisson’s ratio of 0.5. This places a lower bound on the critical stress state maintained by the crust supporting plastically deformed accretionary wedge topography. We estimate the shear stress change from the Maule event ranged from -6 MPa (stress increase) to 14 MPa (stress drop), with a maximum depth-averaged shear stress drop of 4 MPa. We separately estimate that the plate driving forces acting in the region, regardless of their exact mechanism, must contribute at least 15 MPa trench-parallel compression, and trench-perpendicular compression must exceed trench-parallel compression by at least 12 MPa. This corresponds to a depth-averaged shear stress of at least 7 MPa. The comparable magnitude of these two independent shear stress estimates is consistent with the interpretation that the section of the megathrust fault ruptured in the Maule earthquake is weak, with the seismic cycle relieving much of the total sustained shear stress in the crust, and an equal portion of plate-driving stress being transmitted through the mantle.
Chopra, Shruti; Motwani, Sanjay K.; Ahmad, Farhan J.; Khar, Roop K.
2007-11-01
Simple, accurate, reproducible, selective, sensitive and cost effective UV-spectrophotometric methods were developed and validated for the estimation of trigonelline in bulk and pharmaceutical formulations. Trigonelline was estimated at 265 nm in deionised water and at 264 nm in phosphate buffer (pH 4.5). Beer's law was obeyed in the concentration ranges of 1-20 μg mL⁻¹ (r² = 0.9999) in deionised water and 1-24 μg mL⁻¹ (r² = 0.9999) in the phosphate buffer medium. The apparent molar absorptivity and Sandell's sensitivity coefficient were found to be 4.04 × 10³ L mol⁻¹ cm⁻¹ and 0.0422 μg cm⁻²/0.001A in deionised water; and 3.05 × 10³ L mol⁻¹ cm⁻¹ and 0.0567 μg cm⁻²/0.001A in phosphate buffer media, respectively. These methods were tested and validated for various parameters according to ICH guidelines. The detection and quantitation limits were found to be 0.12 and 0.37 μg mL⁻¹ in deionised water and 0.13 and 0.40 μg mL⁻¹ in phosphate buffer medium, respectively. The proposed methods were successfully applied for the determination of trigonelline in pharmaceutical formulations (vaginal tablets and bioadhesive vaginal gels). The results demonstrated that the procedure is accurate, precise, specific and reproducible, and suitable for the analysis of dosage forms and for dissolution studies.
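The reported detection and quantitation limits follow from the standard ICH Q2 formulas, LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the standard deviation of the calibration residuals and S the calibration slope. A sketch with synthetic calibration data; only the formulas mirror the abstract:

```python
# Linear Beer's-law calibration, then LOD/LOQ via the ICH Q2 formulas.
# The absorbance data are synthetic (slope and noise are assumed values).
import numpy as np

conc = np.array([1, 4, 8, 12, 16, 20], dtype=float)       # μg/mL
absorbance = 0.024 * conc + np.array([0.001, -0.001, 0.002, 0.0, -0.002, 0.001])

slope, intercept = np.polyfit(conc, absorbance, 1)
residuals = absorbance - (slope * conc + intercept)
sigma = residuals.std(ddof=2)          # sd of regression residuals (2 fitted params)

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(lod < loq)   # True by construction; the study reported 0.12 and 0.37 μg/mL
```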
Bents, David J.; Lu, Cheng Y.
1989-01-01
Solar photovoltaic (PV) and thermal dynamic power systems for application to selected Low Earth Orbit (LEO) and High Eccentric Orbit (HEO) missions are characterized in the regime 7 to 35 kWe. Input parameters to the characterization are varied corresponding to anticipated introduction of improved or new technologies. Comparative assessment is made between the two power system types utilizing newly emerging technologies in cells and arrays, energy storage, optical surfaces, heat engines, thermal energy storage, and thermal management. The assessment is made to common ground rules and assumptions. The four missions (space station, sun-synchronous, Van Allen belt and GEO) are representative of the anticipated range of multi-kWe earth orbit missions. System characterizations include all required subsystems, including power conditioning, cabling, and structure, to deliver electrical power to the user. Performance is estimated on the basis of three different levels of component technology: (1) state-of-the-art, (2) near-term, and (3) advanced technologies. These range from planar array silicon/IPV nickel hydrogen batteries and Brayton systems at 1000 K to thin film GaAs with high energy density secondary batteries or regenerative fuel cells and 1300 K Stirling systems with ultra-lightweight concentrators and radiators. The system estimates include design margin for performance degradations from the known environmental mechanisms (micrometeoroids and space debris, atomic oxygen, electron and proton flux), which are modeled and applied depending on the mission. The results give expected performance, mass and drag of multi-kWe earth orbiting solar power systems and show how overall system figures of merit will improve as new component technologies are incorporated.
Debebe, Senait A.; Franquiz, Juan; McGoron, Anthony J.
2015-03-01
Selective Internal Radiation Therapy (SIRT) is a common way to treat liver cancer that cannot be treated surgically. SIRT involves administration of Yttrium-90 (90Y) microspheres via the hepatic artery after a diagnostic procedure using 99mTc-macroaggregated albumin (MAA) to detect extrahepatic shunting to the lung or the gastrointestinal tract. Accurate quantification of radionuclide administered to patients and radiation dose absorbed by different organs is of importance in SIRT. Accurate dosimetry for SIRT allows optimization of dose delivery to the target tumor and may allow for the ability to assess the efficacy of the treatment. In this study, we proposed a method that can efficiently estimate radiation absorbed dose from 90Y bremsstrahlung SPECT/CT images of liver and the surrounding organs. Bremsstrahlung radiation from 90Y was simulated using the Compton window of 99mTc (78 keV at 57%). 99mTc images acquired at the photopeak energy window were used as a standard to examine the accuracy of dosimetry prediction by the simulated bremsstrahlung images. A Liqui-Phil abdominal phantom with liver, stomach and two tumor inserts was imaged using a Philips SPECT/CT scanner. The Dose Point Kernel convolution method was used to find the radiation absorbed dose at a voxel level for a three dimensional dose distribution. This method will allow for a complete estimate of the distribution of radiation absorbed dose by tumors, liver, stomach and other surrounding organs at the voxel level. The method provides a quantitative predictive method for SIRT treatment outcome and administered dose response for patients who undergo the treatment.
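Dose point kernel convolution amounts to convolving a 3-D activity map with a radially symmetric kernel to get absorbed dose per voxel. A conceptual sketch; the Gaussian here is a stand-in for a physical 90Y beta dose kernel, not the kernel the study used:

```python
# Conceptual dose-point-kernel (DPK) convolution: activity map * kernel -> dose.
# The Gaussian kernel is a placeholder, not a physical 90Y DPK.
import numpy as np
from scipy.ndimage import gaussian_filter

activity = np.zeros((32, 32, 32))
activity[16, 16, 16] = 1.0            # a point source inside the volume

dose = gaussian_filter(activity, sigma=2.0)   # kernel convolution (placeholder DPK)

# Dose is deposited around, not only at, the source voxel,
# and the total is conserved by the normalized kernel:
print(dose[16, 16, 16] < activity[16, 16, 16])
print(abs(dose.sum() - activity.sum()) < 1e-6)
```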
Šolínová, Veronika; Mikysková, Hana; Kaiser, Martin Maxmilián; Janeba, Zlatko; Holý, Antonín; Kašička, Václav
2016-01-01
Affinity capillary electrophoresis (ACE) has been applied to estimation of apparent binding constant of complexes of (R,S)-enantiomers of selected acyclic nucleoside phosphonates (ANPs) with chiral selector β-cyclodextrin (βCD) in aqueous alkaline medium. The noncovalent interactions of five pairs of (R,S)-enantiomers of ANPs-based antiviral drugs and their derivatives with βCD were investigated in the background electrolyte (BGE) composed of 35 or 50 mM sodium tetraborate, pH 10.0, and containing variable concentration (0-25 mM) of βCD. The apparent binding constants of the complexes of (R,S)-enantiomers of ANPs with βCD were estimated from the dependence of effective electrophoretic mobilities of (R,S)-enantiomers of ANPs (measured simultaneously by ACE at constant reference temperature 25°C inside the capillary) on the concentration of βCD in the BGE using different nonlinear and linear calculation methodologies. Nonlinear regression analysis provided more precise and accurate values of the binding constants and a higher correlation coefficient as compared to the regression analysis of the three linearized plots of the effective mobility dependence on βCD concentration in the BGE. The complexes of (R,S)-enantiomers of ANPs with βCD have been found to be relatively weak - their apparent binding constants determined by the nonlinear regression analysis were in the range 13.3-46.4 L/mol whereas the values from the linearized plots spanned the interval 12.3-55.2 L/mol.
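The nonlinear regression the ACE study favors fits the standard 1:1 complexation isotherm, in which the effective mobility moves from the free-analyte mobility toward the complex mobility as selector concentration rises. A sketch with synthetic mobilities; the parameter values are assumptions chosen near the reported 13-55 L/mol range:

```python
# Nonlinear estimation of an apparent binding constant K from the dependence of
# effective mobility on selector concentration C (1:1 binding isotherm):
#   mu_eff(C) = (mu_free + mu_complex * K * C) / (1 + K * C)
import numpy as np
from scipy.optimize import curve_fit

def mu_eff(c, mu_free, mu_complex, K):
    return (mu_free + mu_complex * K * c) / (1 + K * c)

true = (-25.0, -10.0, 30.0)                   # mu_free, mu_complex, K (assumed)
c = np.array([0, 0.005, 0.010, 0.015, 0.020, 0.025])   # selector, mol/L (0-25 mM)
y = mu_eff(c, *true)                          # synthetic noiseless mobilities

popt, _ = curve_fit(mu_eff, c, y, p0=(-20.0, -5.0, 10.0))
print(round(popt[2], 1))                      # recovered K, L/mol
```

With real, noisy mobilities the same fit yields a standard error for K, which is how the study compares the nonlinear estimates to the linearized plots.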
Zwald, N R; Weigel, K A; Chang, Y M; Welper, R D; Clay, J S
2004-12-01
The objective of this study was to determine the feasibility of genetic selection for health traits in dairy cattle using data recorded in on-farm herd management software programs. Data regarding displaced abomasum (DA), ketosis (KET), mastitis (MAST), lameness (LAME), cystic ovaries (CYST), and metritis (MET) were collected between January 1, 2001 and December 31, 2003 in herds using Dairy Comp 305, DHI-Plus, or PCDART herd management software programs. All herds in this study were either participants in the Alta Genetics (Watertown, WI) Advantage progeny testing program or customers of the Dairy Records Management Systems (Raleigh, NC) processing center. Minimum lactation incidence rates were applied to ensure adequate reporting of these disorders within individual herds. After editing, DA, KET, MAST, LAME, CYST, and MET data from 75,252 (313), 52,898 (250), 105,029 (429), 50,611 (212), 65,080 (340), and 97,318 (418) cows (herds) remained for analysis. Average lactation incidence rates were 0.03, 0.10, 0.20, 0.10, 0.08, and 0.21 for DA, KET, MAST, LAME, CYST, and MET (including retained placenta), respectively. Data for each disorder were analyzed separately using a threshold sire model that included a fixed parity effect and random sire and herd-year-season of calving effects; both first lactation and all lactation analyses were carried out. Heritability estimates from first lactation (all lactation) analyses were 0.18 (0.15) for DA, 0.11 (0.06) for KET, 0.10 (0.09) for MAST, 0.07 (0.06) for LAME, 0.08 (0.05) for CYST, and 0.08 (0.07) for MET. Corresponding heritability estimates for the pooled incidence rate of all diseases between calving and 50 d postpartum were 0.12 and 0.10 for the first and all lactation analyses, respectively. Mean differences in PTA for probability of disease between the 10 best and 10 worst sires were 0.034 for DA, 0.069 for KET, 0.130 for MAST, 0.054 for LAME, 0.039 for CYST, and 0.120 for MET. Based on the results of this study, it
Weisskopf, Marc G; Sparrow, David; Hu, Howard; Power, Melinda C
2015-11-01
The process of creating a cohort or cohort substudy may induce misleading exposure-health effect associations through collider stratification bias (i.e., selection bias) or bias due to conditioning on an intermediate. Studies of environmental risk factors may be at particular risk. We aimed to demonstrate how such biases of the exposure-health effect association arise and how one may mitigate them. We used directed acyclic graphs and the example of bone lead and mortality (all-cause, cardiovascular, and ischemic heart disease) among 835 white men in the Normative Aging Study (NAS) to illustrate potential bias related to recruitment into the NAS and the bone lead substudy. We then applied methods (adjustment, restriction, and inverse probability of attrition weighting) to mitigate these biases in analyses using Cox proportional hazards models to estimate adjusted hazard ratios (HRs) and 95% confidence intervals (CIs). Analyses adjusted for age at bone lead measurement, smoking, and education among all men found HRs (95% CI) for the highest versus lowest tertile of patella lead of 1.34 (0.90, 2.00), 1.46 (0.86, 2.48), and 2.01 (0.86, 4.68) for all-cause, cardiovascular, and ischemic heart disease mortality, respectively. After applying methods to mitigate the biases, the HR (95% CI) among the 637 men analyzed were 1.86 (1.12, 3.09), 2.47 (1.23, 4.96), and 5.20 (1.61, 16.8), respectively. Careful attention to the underlying structure of the observed data is critical to identifying potential biases and methods to mitigate them. Understanding factors that influence initial study participation and study loss to follow-up is critical. Recruitment of population-based samples and enrolling participants at a younger age, before the potential onset of exposure-related health effects, can help reduce these potential pitfalls. Weisskopf MG, Sparrow D, Hu H, Power MC. 2015. Biased exposure-health effect estimates from selection in cohort studies: are environmental studies at
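One of the mitigation methods named here, inverse probability of attrition weighting, reweights the subjects who remain under observation by the inverse of their modeled probability of remaining, so the analytic sample again resembles the full cohort. A minimal, purely illustrative sketch in which retention depends on exposure and the true retention model is known:

```python
# Inverse-probability-of-attrition weighting sketch: weight retained subjects by
# 1/Pr(retained) to undo selection. All quantities are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
exposure = rng.normal(size=n)
# Retention probability falls with exposure (the selection to be corrected):
p_stay = 1 / (1 + np.exp(-(0.5 - 0.8 * exposure)))
stayed = rng.random(n) < p_stay

weights = 1 / p_stay[stayed]
naive_mean = exposure[stayed].mean()                      # biased low
weighted_mean = np.average(exposure[stayed], weights=weights)

# Weighting moves the estimate back toward the full-cohort mean:
print(abs(weighted_mean - exposure.mean()) < abs(naive_mean - exposure.mean()))
```

In practice the retention probabilities must themselves be estimated, e.g. by logistic regression on baseline covariates, which is where misspecification risk enters.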
Boney Wera
2015-07-01
Full Text Available A successful crop breeding program incorporating agronomic and consumer-preferred traits can be achieved by recognizing the existence and degree of variability among sweetpotato (Ipomoea batatas (L.) Lam.) genotypes. Understanding genetic variability, genotypic and phenotypic correlation and inheritance among agronomic traits is fundamental to improvement of any crop. The study was carried out with the objective of estimating the genotypic variability and other yield-related traits of highlands sweetpotato in Papua New Guinea in a polycross population. A total of 8 genotypes of sweetpotato derived from the polycross were considered in two cycles of replicated field experiments. Analysis of Variance was computed to contrast the variability within the selected genotypes based on high-yielding, β-carotene-rich orange-fleshed sweetpotato. The results revealed significant differences among the genotypes. The genotypic coefficient of variation (GCV%) was lower than the phenotypic coefficient of variation (PCV%) for all traits studied. Relatively high genetic variance, along with high heritability and expected genetic advance, was observed in NMTN and ABYield. Harvest index (HI), scab and gall mite damage scores had heritabilities of 67%, 66% and 37%, respectively. Marketable tuber yield (MTYield) and total tuber yield (TTYield) had lower genetic variance, low heritability and low genetic advance. There is a need to investigate correlated inheritance among these traits. Selecting directly for yield improvement in a polycross population may not be very efficient, as indicated by the results. Therefore, it can be concluded that the variability for tuber yield within sweetpotato genotypes collected from the polycross population at Aiyura Research Station is low and the extent of its yield improvement is narrow.
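The summary statistics named in this abstract are standard quantitative-genetics formulas; a sketch with illustrative variance components (not values from the study):

```python
# GCV%/PCV%, broad-sense heritability, and expected genetic advance.
# The variance components and trait mean below are assumed for illustration.
import math

var_g, var_e, mean = 4.0, 6.0, 25.0     # genotypic variance, error variance, mean
var_p = var_g + var_e                   # phenotypic variance

gcv = math.sqrt(var_g) / mean * 100     # genotypic coefficient of variation, %
pcv = math.sqrt(var_p) / mean * 100     # phenotypic coefficient of variation, %
h2 = var_g / var_p                      # broad-sense heritability
k = 2.06                                # selection intensity at 5% selection
ga = k * h2 * math.sqrt(var_p)          # expected genetic advance

print(gcv < pcv)        # always True, since var_p >= var_g (as the abstract notes)
print(round(h2, 2))
```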
Lee, H.; Haimson, B.
2007-12-01
Salinian granodiorite core from the 1462-1470m segment of the SAFOD drillhole was used to derive its critical mechanical properties under true triaxial stress conditions, analyze shear localization and brittle fracture characteristics, and establish the strength criterion under dry conditions (Eos Trans. AGU, 87/52, Abstract T32C-03). Here we report on a series of true triaxial tests on 'unjacketed' specimens simulating stress conditions prevailing at the drillhole wall and responsible for borehole failure in the form of breakouts. Owing to numerous random cracks inherent in the core, only 11 rectangular prismatic specimens (19×19×38 mm³) were successfully tested, employing the University of Wisconsin polyaxial cell. The two larger principal stresses, σ1 and σ2, were transmitted through metal pistons, while σ3 was applied by confining fluid pressure. Specimen sides facing σ3 were left 'unjacketed', i.e. in direct contact with the confining fluid, to simulate the condition of drilling-mud pressure applying the principal radial stress (σ3) to the exposed borehole wall. The loading path called for first bringing σ2 and σ3 to preset levels and then increasing σ1 at a constant strain rate (5×10⁻⁶/s) until brittle failure occurred. Invariably, failure occurred at σ1 levels that were only about half as high as those in previously tested dry samples under the same σ2 and σ3 magnitudes. Instead of a shear fracture, or fault, steeply inclined in the direction of σ3, as previously observed in the dry specimens, brittle failure took the form of a localized cluster of through-going extensile cracks parallel and adjacent to the faces subjected to σ3. Since failure occurred at σ1 values close to those at dilatancy onset in dry specimens, we infer that as soon as microcracks reopened, confining fluid rushed into those daylighting at the σ3 faces and extended them along a path of least resistance, i.e. along a plane normal to σ3. Thus brittle failure under
Seo, Youngseob; Wang, Zhiyue J; Morriss, Michael C; Rollins, Nancy K
2012-10-01
Although it is known that low signal-to-noise ratio (SNR) can affect tensor metrics, few studies reporting disease or treatment effects on fractional anisotropy (FA) report SNR; the implicit assumption is that SNR is adequate. However, the level at which low SNR causes bias in FA may vary with tissue FA, field strength and analytical methodology. We determined the SNR thresholds at 1.5 T vs. 3 T in regions of white matter (WM) with different FA and compared FA derived using manual region-of-interest (ROI) analysis to tract-based spatial statistics (TBSS), an operator-independent whole-brain analysis tool. Using ROI analysis, SNR thresholds on our hardware-software magnetic resonance platforms were 25 at 1.5 T and 20 at 3 T in the callosal genu (CG), 40 at 1.5 and 3 T in the anterior corona radiata (ACR), and 50 at 1.5 T and 70 at 3 T in the putamen (PUT). Using TBSS, SNR thresholds were 20 at 1.5 T and 3 T in the CG, and 35 at 1.5 T and 40 at 3 T in the ACR. Below these thresholds, the mean FA increased logarithmically, and the standard deviations widened. Achieving bias-free SNR in the PUT required at least nine acquisitions at 1.5 T and six acquisitions at 3 T. In the CG and ACR, bias-free SNR was achieved with at least three acquisitions at 1.5 T and one acquisition at 3 T. Using diffusion tensor imaging (DTI) to study regions of low FA, e.g., basal ganglia, cerebral cortex, and WM in the abnormal brain, SNR should be documented. SNR thresholds below which FA is biased varied with the analytical technique, inherent tissue FA and field strength. Studies using DTI to study WM injury should document that bias-free SNR has been achieved in the region of the brain being studied as part of quality control. Copyright © 2012 Elsevier Inc. All rights reserved.
2002-01-01
A hybrid experiment to observe directly particles with open beauty and estimate their lifetimes is proposed. The experiment will take place in a π⁻ beam at 360 GeV/c. Events of the type π⁻N → BB̄X will be produced in a thick emulsion, allowing for a lifetime range of 10⁻¹⁵ - 10⁻¹² s. The decay vertices of B and B̄ and of the subsequent charm decays will be identified in emulsion. The precise location of the production vertex will be measured by high-precision (50 μm pitch) silicon microstrip detectors. A set of planes of such detectors will be placed in front of the target to measure the incoming beam particle, and another set of planes, together with 16 planes of MWPCs, will be placed behind the target to measure the secondaries. The semi-leptonic decays of B's and C's are used to create a selective trigger. The data taking will be triggered by a single muon with an angle to the beam α > 30 mrad, or by ≥ 2 muons. Transverse momentum cuts will be applied off-line. The muons are identified...
Andrea Copeland
2014-05-01
A considerable amount of information, particularly in image form, is shared on the web through social networking sites. If any of this content is worthy of preservation, who decides what is to be preserved, and based on what criteria? This paper explores the potential for public libraries to assume the role of community digital repositories through the creation of digital collections. Thirty public library users and thirty librarians were solicited from the Indianapolis metropolitan area to evaluate five images selected from Flickr in terms of their value to public library digital collections and their worthiness of long-term preservation. Using a seven-point Likert scale, participants assigned a value to each image in terms of its importance to self, family and society. Participants were then asked to explain the reasoning behind their valuations. Public library users and librarians had similar value estimations of the images in the study. This is perhaps the most significant finding of the study, given the importance of collaboration and forming partnerships for building and sustaining community collections and archives.
Corrado Dimauro
2010-01-01
Two methods of SNP pre-selection based on single-marker regression for the estimation of genomic breeding values (G-EBVs) were compared using simulated data provided by the XII QTL-MAS workshop: (i) Bonferroni correction of the significance threshold, and (ii) a permutation test to obtain the reference distribution of the null hypothesis and identify significant markers at the P<0.01 and P<0.001 significance thresholds. From the set of markers significant at P<0.001, random subsets of 50% and 25% of the markers were extracted to evaluate the effect of further reducing the number of significant SNPs on G-EBV predictions. The Bonferroni correction method allowed the identification of 595 significant SNPs that gave the best G-EBV accuracies in prediction generations (82.80%). The permutation method gave slightly lower G-EBV accuracies even though a larger number of SNPs resulted significant (2,053 and 1,352 for the 0.01 and 0.001 significance thresholds, respectively). Interestingly, halving or quartering the number of SNPs significant at P<0.001 resulted in only a slight decrease in G-EBV accuracies. The genetic structure of the simulated population, with few QTL carrying large effects, might have favoured the Bonferroni method.
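The two thresholding strategies can be sketched on toy data: single-marker regression p-values are compared against a Bonferroni-corrected threshold and against a permutation-derived threshold (here the 5th percentile of the null distribution of the experiment-wide minimum p-value; the workshop analysis used per-threshold variants, so details differ). Sample sizes, effect size, and the permutation count are invented.

```python
import numpy as np
from scipy.stats import t as tdist

rng = np.random.default_rng(0)
n, m = 200, 500                                  # individuals, SNPs (toy sizes)
geno = rng.integers(0, 3, size=(n, m)).astype(float)
pheno = geno[:, 0] * 0.8 + rng.normal(size=n)    # only SNP 0 has a real effect

def marker_pvalues(g, y):
    """Single-marker regression p-values via the correlation t-test."""
    gz = (g - g.mean(axis=0)) / g.std(axis=0)
    yz = (y - y.mean()) / y.std()
    r = gz.T @ yz / len(y)
    tstat = r * np.sqrt((len(y) - 2) / (1 - r ** 2))
    return 2 * tdist.sf(np.abs(tstat), df=len(y) - 2)

pvals = marker_pvalues(geno, pheno)
bonferroni_hits = np.flatnonzero(pvals < 0.05 / m)

# Permutation: reference distribution of the experiment-wide minimum p-value
null_min = np.array([marker_pvalues(geno, rng.permutation(pheno)).min()
                     for _ in range(200)])
perm_hits = np.flatnonzero(pvals < np.quantile(null_min, 0.05))
```

Permuting the phenotype breaks any genotype-phenotype association while preserving the correlation structure among markers, which is why it typically yields a less conservative threshold than Bonferroni when markers are correlated.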
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
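GLUE as used here can be sketched for a constant-rate inactivation model, ln(C/C0) = -kt: sample many candidate rate constants, score each with an informal likelihood, keep the "behavioural" candidates above a subjective acceptance threshold, and summarize the retained set. The data, likelihood measure, and threshold below are invented, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.array([0, 7, 14, 28, 42, 60], dtype=float)   # sampling days
log_c = -0.05 * t + rng.normal(0, 0.1, t.size)      # synthetic ln(C/C0), true k = 0.05/d

def sse(k):
    """Sum of squared errors of the constant-rate model ln(C/C0) = -k*t."""
    return np.sum((log_c + k * t) ** 2)

# GLUE: Monte-Carlo sample candidate rates, score with an informal likelihood,
# keep "behavioural" candidates above a subjective threshold, then summarize.
ks = rng.uniform(0.0, 0.2, 10_000)
scores = np.array([sse(k) for k in ks])
likelihood = np.exp(-scores / scores.min())
behavioural = likelihood > 0.1
weights = likelihood[behavioural] / likelihood[behavioural].sum()
k_med = np.average(ks[behavioural], weights=weights)
lo, hi = np.percentile(ks[behavioural], [2.5, 97.5])
```

Unlike least squares, GLUE returns a whole ensemble of acceptable parameter values, so the spread (lo, hi) is a direct uncertainty statement; when the behavioural set is tight around the least-squares optimum, the two approaches agree, as the study found.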
Scranton, Katherine; Lummaa, Virpi; Stearns, Stephen C
2016-08-01
Although fitness is central to the evolutionary process, metrics vary by timescale. Different timescales may give rise to different estimates of selection, especially during demographic transitions caused by rapid environmental and socioeconomic change. In this study, we used a dataset of a human population in Finland from 1775 to 1950 to compare two fitness metrics and their estimates of selection pressures, before and during a demographic transition. Both metrics, lifetime reproductive success and an annual metric of individual performance, declined while selection on the ages at first and last reproduction remained nearly constant, favouring individuals with wider reproductive windows. The ability to partition the annual metric into contributions from reproduction and survival revealed the short-term effects of a famine and the reversal of selection pressure via the survival component of annual fitness. Although the metrics generally agreed, the annual metric detected the effects of environmental variation and demographic change occurring within a generation. © 2016 John Wiley & Sons Ltd/CNRS.
Du, Lin; Shi, Shuo; Gong, Wei; Yang, Jian; Sun, Jia; Mao, Feiyue
2016-06-01
Hyperspectral LiDAR (HSL) is a novel tool in the field of active remote sensing, which has been widely used in many domains because of its ability to acquire spectral information. Especially in the precise monitoring of nitrogen in green plants, HSL plays an indispensable role. The existing HSL system used for nitrogen status monitoring has a multi-channel detector, which can improve the spectral resolution and receiving range, but may also result in data redundancy, difficulty in system integration and high cost. Thus, it is necessary and urgent to pick out the nitrogen-sensitive feature wavelengths within the spectral range. The present study, aiming to solve this problem, assigns a feature weighting to each centre wavelength of the HSL system by using matrix coefficient analysis and a divergence threshold. The feature weighting is a criterion for amending the centre wavelengths of the detector to accommodate different purposes, especially the estimation of leaf nitrogen content (LNC) in rice. In this way, the wavelengths highly correlated with LNC can be ranked in descending order and used to estimate rice LNC sequentially. In this paper, an HSL system based on a wide-spectrum emitter and a 32-channel detector was used to collect the reflectance spectra of rice leaves. These spectra cover a range of 538 nm - 910 nm with a resolution of 12 nm, a range strongly absorbed by chlorophyll in green plants. The relationship between rice LNC and the reflectance-based spectra was modeled using partial least squares (PLS) and support vector machines (SVMs) based on calibration and validation datasets, respectively. The results indicate that I) the wavelength selection method of HSL based on feature weighting is effective in choosing nitrogen-sensitive wavelengths, and can be co-adapted with the hardware of the HSL system. II) The chosen wavelength has a high correlation with rice LNC which can be
Strupczewski, Witold G.; Bogdanowich, Ewa; Debele, Sisay
2016-04-01
Under Polish climate conditions the series of Annual Maxima (AM) flows are usually a mixture of peak flows of thaw- and rainfall-originated floods. The northern, lowland regions are dominated by snowmelt floods whilst in mountainous regions the proportion of rainfall floods is predominant. In many stations the majority of AM can be of snowmelt origin, but the greatest peak flows come from rainfall floods, or vice versa. In a warming climate, precipitation is less likely to occur as snowfall. A shift from a snow- towards a rain-dominated regime results in a decreasing trend in the mean and standard deviation of winter peak flows, whilst rainfall floods do not exhibit any trace of non-stationarity. That is why simple forms of trend (e.g. linear trends) are more difficult to identify in AM time-series than in Seasonal Maxima (SM), usually winter-season, time-series. Hence it is recommended to analyse trends in SM, where a trend in the standard deviation strongly influences the time-dependent upper quantiles. The uncertainty associated with the extrapolation of the trend makes it necessary to apply a relationship for the trend whose time derivative tends to zero, e.g. we can assume that a new climate equilibrium epoch is approaching, or that the time horizon is limited by the validity of the trend model. For both winter and summer SM time series, at least three distribution functions with trend models in the location, scale and shape parameters are estimated by means of the GAMLSS package using ML techniques. The resulting trend estimates in mean and standard deviation are mutually compared to the observed trends. Then, using AIC measures as weights, a multi-model distribution is constructed for each of the two seasons separately. Further, assuming mutual independence of the seasonal maxima, an AM model with time-dependent parameters can be obtained. The use of a multi-model approach can alleviate the effects of different and often contradictory trends obtained by using and identifying
Interaction intensity and pollinator-mediated selection.
Trunschke, Judith; Sletvold, Nina; Ågren, Jon
2017-02-27
In animal-pollinated plants, the opportunity for selection and the strength of pollinator-mediated selection are expected to increase with the degree of pollen limitation. However, whether differences in pollen limitation can explain variation in pollinator-mediated and net selection among animal-pollinated species is poorly understood. In the present study, we quantified pollen limitation, variance in relative fitness and pollinator-mediated selection on five traits important for pollinator attraction (flowering start, plant height, flower number, flower size) and pollination efficiency (spur length) in natural populations of 12 orchid species. Pollinator-mediated selection was quantified by subtracting estimates of selection gradients for plants receiving supplemental hand-pollination from estimates obtained for open-pollinated control plants. Mean pollen limitation ranged from zero to 0.96. Opportunity for selection, pollinator-mediated selection and net selection were all positively related to pollen limitation, whereas nonpollinator-mediated selection was not. Opportunity for selection varied five-fold, strength of pollinator-mediated selection varied three-fold and net selection varied 1.5-fold among species. Supplemental hand-pollination reduced both opportunity for selection and selection on floral traits. The results show that the intensity of biotic interactions is an important determinant of the selection regime, and indicate that the potential for pollinator-mediated selection and divergence in floral traits is particularly high in species that are strongly pollen-limited.
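The subtraction design used in the study can be sketched directly: estimate a Lande-Arnold selection gradient separately for open-pollinated and supplementally hand-pollinated plants and take the difference as the pollinator-mediated component. The data and coefficients below are synthetic and invented, not the orchid measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
trait = rng.normal(size=n)                     # a floral trait, e.g. spur length
# Open-pollinated plants: fitness depends on the trait via pollinators
w_open = 1 + 0.3 * trait + rng.normal(0, 0.5, n)
# Hand-pollinated plants: the pollinator-mediated component is removed
w_hand = 1 + 0.05 * trait + rng.normal(0, 0.5, n)

def selection_gradient(w, z):
    """Lande-Arnold linear selection gradient for one trait: the slope of
    relative fitness regressed on the variance-standardized trait."""
    rel_w = w / w.mean()
    z = (z - z.mean()) / z.std()
    return np.polyfit(z, rel_w, 1)[0]

beta_open = selection_gradient(w_open, trait)
beta_hand = selection_gradient(w_hand, trait)
beta_poll = beta_open - beta_hand              # pollinator-mediated selection
```

Because hand-pollination removes pollen limitation, any selection remaining in the hand-pollinated group is attributed to non-pollinator agents, and the difference isolates the pollinator-mediated part.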
Y. Plancherel
2013-07-01
Quantifying oceanic anthropogenic carbon uptake by monitoring interior dissolved inorganic carbon (DIC) concentrations is complicated by the influence of natural variability. The "eMLR method" aims to address this issue by using empirical regression fits of the data instead of the data themselves, inferring the change in anthropogenic carbon over time from the difference between the predictions generated by the regressions at each time. The advantages of the method are that it provides, in principle, a means to filter out natural variability, which theoretically becomes the regression residuals, and a way to deal with sparsely and unevenly distributed data. The degree to which these advantages are realized in practice is unclear, however. The ability of the eMLR method to recover the anthropogenic carbon signal is tested here using a global circulation and biogeochemistry model in which the true signal is known. Results show that regression model selection is particularly important when the observational network changes in time. When the observational network is fixed, the likelihood that co-located systematic misfits between the empirical model and the underlying, yet unknown, true model cancel is greater, improving eMLR results. Changing the observational network modifies how the spatio-temporal variance pattern is captured by the respective datasets, resulting in empirical models that are dynamically or regionally inconsistent, leading to systematic errors. In consequence, the use of regression formulae that change in time to represent systematically best-fit models at all times does not guarantee the best estimates of anthropogenic carbon change if the spatial distributions of the stations emphasize hydrographic features differently in time. Other factors, such as a balanced and representative station coverage, vertical continuity of the regression formulae consistent with the hydrographic context and resiliency of the spatial distribution of the residual
蒋德稳; 田安国; 胡杰; 赵政
2015-01-01
In order to estimate concrete strength quickly, microwave technology was applied to the conventional autoclave curing process, and a small microwave autoclave was designed and built for this purpose. Experiments established how the temperature and pressure in the autoclave chamber and the duration of constant-temperature, constant-pressure curing affect the early strength of concrete, from which a rapid curing regime under microwave autoclave conditions was derived. Through comparative tests of microwave autoclave curing versus standard curing on 15 batches (5 mix proportions) of concrete of different strength grades, linear and nonlinear regression models of concrete strength under microwave autoclave curing were established. The models account for the accelerated-curing strength and the water-cement ratio of the concrete and show good correlation. The experimental analysis shows that microwave technology can raise the internal temperature of concrete uniformly and rapidly, reducing thermal stress, and gives concrete a high (40%-50% of the 28 d strength) and stable early strength within 1 h. The method can therefore serve as a basis for strength control and mix-proportion adjustment by ready-mixed concrete producers and on construction sites.
Moorad, Jacob A
2013-06-01
Modernization has increased longevity and decreased fertility in many human populations, but it is not well understood how or to what extent these demographic transitions have altered patterns of natural selection. I integrate individual-based multivariate phenotypic selection approaches with evolutionary demographic methods to demonstrate how a demographic transition in 19th century female populations of Utah altered relationships between fitness and age-specific survival and fertility. Coincident with this demographic transition, natural selection for fitness, as measured by the opportunity for selection, increased by 13% to 20% over 65 years. Proportional contributions of age-specific survival to total selection (the complement to age-specific fertility) diminished from approximately one third to one seventh following a marked increase in infant survival. Despite dramatic reductions in age-specific fertility variance at all ages, the absolute magnitude of selection for fitness explained by age-specific fertility increased by approximately 45%. I show that increases in the adaptive potential of fertility traits followed directly from decreased population growth rates. These results suggest that this demographic transition has increased the adaptive potential of the Utah population, intensified selection for reproductive traits, and de-emphasized selection for survival-related traits. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
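The "opportunity for selection" used here is Crow's index: the variance in relative fitness, which bounds how much directional selection any trait can experience. A toy computation (the cohorts and numbers are invented and are not chosen to reproduce the paper's direction of change):

```python
import numpy as np

def opportunity_for_selection(fitness):
    """Crow's opportunity for selection: the variance in relative fitness,
    I = Var(W) / mean(W)^2, an upper bound on the strength of selection."""
    W = np.asarray(fitness, float)
    return W.var() / W.mean() ** 2

# Toy cohorts of lifetime reproductive success: fewer zero-fitness
# individuals (e.g. lower infant mortality) shrinks the variance and I.
pre = [0, 0, 0, 2, 3, 4, 6, 8]
post = [1, 2, 2, 3, 3, 4, 4, 5]
I_pre = opportunity_for_selection(pre)
I_post = opportunity_for_selection(post)
```

The index is scale-free (relative fitness), so cohorts with very different mean family sizes can be compared directly, which is what makes it usable across a demographic transition.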
Howe, Lauren C; Krosnick, Jon A
2017-01-03
Attitude strength has been the focus of a huge volume of research in psychology and related sciences for decades. The insights offered by this literature have tremendous value for understanding attitude functioning and structure and for the effective application of the attitude concept in applied settings. This is the first Annual Review of Psychology article on the topic, and it offers a review of theory and evidence regarding one of the most researched strength-related attitude features: attitude importance. Personal importance is attached to an attitude when the attitude is perceived to be relevant to self-interest, social identification with reference groups or reference individuals, and values. Attaching personal importance to an attitude causes crystallizing of attitudes (via enhanced resistance to change), effortful gathering and processing of relevant information, accumulation of a large store of well-organized relevant information in long-term memory, enhanced attitude extremity and accessibility, enhanced attitude impact on the regulation of interpersonal attraction, energizing of emotional reactions, and enhanced impact of attitudes on behavioral intentions and action. Thus, important attitudes are real and consequential psychological forces, and their study offers opportunities for addressing behavioral change.
Study Of Ceramic-Polymer Composites Reliability Based On The Bending Strength Test
Walczak Agata
2015-11-01
In this paper the assessment of the structural reliability of selected light-cured dental composites based on biaxial flexural strength test results is presented. A two-parameter Weibull distribution was applied as a reliability model in order to estimate the probability of strength maintenance in the analysed population. The Weibull distribution parameters were interpreted as a characteristic material strength (scale parameter) and a structural reliability parameter describing the ability of each specimen from the general population to maintain that strength (shape parameter). 20 composite specimens underwent strength tests, covering 2 "flow"-type composites and 2 standard composites (with typical filler content). The "flow"-type composites were characterized by lower characteristic strength and higher structural reliability compared to the other composites studied.
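The two-parameter Weibull model used here can be fitted, for example, by linear regression on the linearized empirical CDF (a common median-rank approach; the strength values below are invented, not the paper's data):

```python
import numpy as np

def weibull_fit(strengths):
    """Two-parameter Weibull fit by linear regression on the
    ln-ln-transformed empirical CDF (median-rank estimator).
    Returns (shape m, scale sigma0): for the strength distribution
    F(s) = 1 - exp(-(s/sigma0)^m), ln(-ln(1-F)) = m*ln(s) - m*ln(sigma0)."""
    s = np.sort(np.asarray(strengths, float))
    n = s.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median ranks
    m, c = np.polyfit(np.log(s), np.log(-np.log(1 - F)), 1)
    return m, np.exp(-c / m)

def reliability(sigma, m, sigma0):
    """Probability that a specimen survives stress sigma."""
    return np.exp(-(sigma / sigma0) ** m)

strengths = [92, 105, 110, 118, 123, 131, 137, 145, 152, 160]  # MPa, illustrative
m, s0 = weibull_fit(strengths)
```

A higher shape parameter m means a narrower strength distribution, i.e. more consistent (structurally reliable) specimens; the scale parameter sigma0 is the stress at which about 63% of specimens have failed.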
2007-01-01
The present invention relates to an image analysis method for estimating the number or amount of objects in an image, for example the number of cancer cells in a tissue slice, where the image is partitioned into sectors and some of the sectors are selected for measuring the objects in the sector...
Smith, S. Jerrod
2013-01-01
From the 1890s through the 1970s the Picher mining district in northeastern Ottawa County, Oklahoma, was the site of mining and processing of lead and zinc ore. When mining ceased in about 1979, as much as 165–300 million tons of mine tailings, locally referred to as “chat,” remained in the Picher mining district. Since 1979, some chat piles have been mined for aggregate materials and have decreased in volume and mass. Currently (2013), the land surface in the Picher mining district is covered by thousands of acres of chat, much of which remains on Indian trust land owned by allottees. The Bureau of Indian Affairs manages these allotted lands and oversees the sale and removal of chat from these properties. To help the Bureau of Indian Affairs better manage the sale and removal of chat, the U.S. Geological Survey, in cooperation with the Bureau of Indian Affairs, estimated the 2005 and 2010 volumes and masses of selected chat piles remaining on allotted lands in the Picher mining district. The U.S. Geological Survey also estimated the changes in volume and mass of these chat piles for the period 2005 through 2010. The 2005 and 2010 chat-pile volume and mass estimates were computed for 34 selected chat piles on 16 properties in the study area. All computations of volume and mass were performed on individual chat piles and on groups of chat piles in the same property. The Sooner property had the greatest estimated volume (4.644 million cubic yards) and mass (5.253 ± 0.473 million tons) of chat in 2010. Five of the selected properties (Sooner, Western, Lawyers, Skelton, and St. Joe) contained estimated chat volumes exceeding 1 million cubic yards and estimated chat masses exceeding 1 million tons in 2010. Four of the selected properties (Lucky Bill Humbah, Ta Mee Heh, Bird Dog, and St. Louis No. 6) contained estimated chat volumes of less than 0.1 million cubic yards and estimated chat masses of less than 0.1 million tons in 2010. The total volume of all
Hansen, Stinus; Brixen, Kim; Gravholt, Claus H
2012-08-01
Although bone mass appears ample for bone size in Turner syndrome (TS), epidemiological studies have reported an increased risk of fracture in TS. We used high-resolution peripheral quantitative computed tomography (HR-pQCT) to measure standard morphological parameters of bone geometry and microarchitecture, as well as estimated bone strength by finite element analysis (FEA), to assess bone characteristics beyond bone mineral density (BMD) that possibly contribute to the increased risk of fracture. Thirty-two TS patients (median age 35, range 20-61 years) and 32 healthy control subjects (median age 36, range 19-58 years) matched with the TS participants with respect to age and body-mass index were studied. A full region of interest (ROI) image analysis and a height-matched ROI analysis adjusting for differences in body height between groups were performed. Mean bone cross-sectional area was lower in TS patients in radius (-15%) and tibia (-13%), and significant between-group differences were also found in volumetric density, trabecular microarchitecture and cortical parameters at both sites, including trabecular number in the tibia. FEA-estimated failure load was lower in TS patients in both radius (-11%) and tibia (-16%) (both p < 0.01) and remained significantly lower in the height-matched ROI analysis. Conclusively, TS patients had compromised trabecular microarchitecture and lower bone strength at both skeletal sites, which may partly account for the increased risk of fracture observed in these patients.
Aldworth, Zane N; Miller, John P; Gedeon, Tomás; Cummins, Graham I; Dimitrov, Alexander G
2005-06-01
What is the meaning associated with a single action potential in a neural spike train? The answer depends on the way the question is formulated. One general approach toward formulating this question involves estimating the average stimulus waveform preceding spikes in a spike train. Many different algorithms have been used to obtain such estimates, ranging from spike-triggered averaging of stimuli to correlation-based extraction of "stimulus-reconstruction" kernels or spatiotemporal receptive fields. We demonstrate that all of these approaches miscalculate the stimulus feature selectivity of a neuron. Their errors arise from the manner in which the stimulus waveforms are aligned to one another during the calculations. Specifically, the waveform segments are locked to the precise time of spike occurrence, ignoring the intrinsic "jitter" in the stimulus-to-spike latency. We present an algorithm that takes this jitter into account. "Dejittered" estimates of the feature selectivity of a neuron are more accurate (i.e., provide a better estimate of the mean waveform eliciting a spike) and more precise (i.e., have smaller variance around that waveform) than estimates obtained using standard techniques. Moreover, this approach yields an explicit measure of spike-timing precision. We applied this technique to study feature selectivity and spike-timing precision in two types of sensory interneurons in the cricket cercal system. The dejittered estimates of the mean stimulus waveforms preceding spikes were up to three times larger than estimates based on the standard techniques used in previous studies and had power that extended into higher-frequency ranges. Spike timing precision was approximately 5 ms.
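The dejittering idea can be sketched schematically: the stimulus segments preceding each spike are iteratively realigned within a small jitter window so that each segment best matches the running average before the average is re-estimated. This is a simplified reimplementation under invented parameters (window length, jitter range, embedded feature), not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
win, jit = 50, 5                  # waveform length and max jitter, in samples
true_feature = np.hanning(win)    # the waveform that "causes" spikes (invented)
stim = rng.normal(size=20_000)
spike_times = np.arange(200, 19_000, 97)
for t in spike_times:             # embed the feature before each true spike time
    stim[t - win:t] += true_feature
# Recorded spike times carry intrinsic stimulus-to-spike latency jitter
jittered = spike_times + rng.integers(-jit, jit + 1, spike_times.size)

def sta(times):
    """Spike-triggered average of the stimulus preceding each spike."""
    return np.mean([stim[t - win:t] for t in times], axis=0)

def dejitter(times, n_iter=5):
    """Iteratively realign each spike within +/- jit samples so its
    preceding stimulus segment best matches the running average."""
    times = times.copy()
    shifts = np.arange(-jit, jit + 1)
    for _ in range(n_iter):
        template = sta(times)
        for i, t in enumerate(times):
            scores = [stim[t + s - win:t + s] @ template for s in shifts]
            times[i] = t + shifts[int(np.argmax(scores))]
    return sta(times)

raw, clean = sta(jittered), dejitter(jittered)
```

The raw STA is a jitter-smeared (hence broadened and attenuated) version of the underlying feature; realignment concentrates the segments and sharpens the estimate, which is the effect the abstract reports as larger, higher-frequency dejittered waveforms.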
Whitney, J. M.
1983-01-01
The notch strength of composites is discussed. The point stress and average stress criteria relate the notched strength of a laminate to the average strength of a relatively long tensile coupon. Tests of notched specimens in which microstrain gages have been placed at or near the edges of the holes have measured strains much larger than those measured in an unnotched tensile coupon. Orthotropic stress concentration analyses of failed notched laminates have also indicated that failure occurred at strains much larger than those experienced on tensile coupons with normal gage lengths. This suggests that the high strains at the edge of a hole can be related to the very short length of fiber subjected to these strains. Lockheed has attempted to correlate a series of tests of several laminates with holes ranging from 0.19 to 0.50 in. Although the average stress criterion correlated well with test results for hole sizes equal to or greater than 0.50 in., it over-estimated the laminate strength in the range of hole sizes from 0.19 to 0.38 in. It thus appears that a theory is needed that is based on the mechanics of failure and is more generally applicable to the range of hole sizes and the varieties of laminates found in aircraft construction.
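The point stress and average stress criteria referred to are, in their standard Whitney-Nuismer form for a circular hole in an infinite orthotropic plate, closed-form strength-ratio expressions. A sketch follows; the characteristic distances a0 and d0 and the hole radii are illustrative values, not fitted ones:

```python
def average_stress_ratio(R, a0, Kt=3.0):
    """Whitney-Nuismer average stress criterion for a circular hole of
    radius R: ratio of notched to unnotched strength, with characteristic
    averaging distance a0 and orthotropic stress concentration factor Kt
    (Kt = 3 for an isotropic plate)."""
    x = R / (R + a0)
    return 2 * (1 - x) / (2 - x**2 - x**4 + (Kt - 3) * (x**6 - x**8))

def point_stress_ratio(R, d0, Kt=3.0):
    """Whitney-Nuismer point stress criterion (characteristic distance d0)."""
    x = R / (R + d0)
    return 2 / (2 + x**2 + 3 * x**4 - (Kt - 3) * (5 * x**6 - 7 * x**8))

# Strength retention falls as the hole radius grows (the "hole size effect")
ratios = [average_stress_ratio(R, a0=0.15) for R in (0.095, 0.19, 0.25)]  # inches
```

Both criteria recover the expected limits: as R goes to 0 the ratio tends to 1 (no notch sensitivity), and as R grows large it tends to 1/Kt, which is why a single characteristic distance fitted at one hole size can over- or under-predict strength at other sizes, as the Lockheed correlation found.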
黄耀英; 郑宏; 周宜红; 武先伟
2012-01-01
Based on the three-dimensional (3D) elastic creep simulation formulation, the formulas used to convert strain-gauge-group measurements into actual stresses were analyzed. It is found that the formulas currently used to compute 3D actual stresses from strain-gauge-group measurements are imperfect, and a theoretically rigorous formula for converting strain-gauge-group measurements into 3D actual stresses is given. A small-probability-event method is proposed for estimating the actual tensile strength of dam concrete. Combining the measurements of the strain gauge groups and no-stress gauges embedded in a concrete dam, the estimation of the actual tensile strength and ultimate tensile strain by the small-probability-event method is preliminarily explored. Engineering practice shows that, once a long series of strain-gauge-group measurements and a sufficiently large sample of strain-gauge-group readings are available, the small-probability-event method yields tensile strengths and ultimate tensile strains that conform to the actual condition of the dam concrete.
Huizinga, Richard J.; Rydlund, Jr., Paul H.
2004-01-01
The evaluation of scour at bridges throughout the state of Missouri has been ongoing since 1991 in a cooperative effort by the U.S. Geological Survey and Missouri Department of Transportation. A variety of assessment methods have been used to identify bridges susceptible to scour and to estimate scour depths. A potential-scour assessment (Level 1) was used at 3,082 bridges to identify bridges that might be susceptible to scour. A rapid estimation method (Level 1+) was used to estimate contraction, pier, and abutment scour depths at 1,396 bridge sites to identify bridges that might be scour critical. A detailed hydraulic assessment (Level 2) was used to compute contraction, pier, and abutment scour depths at 398 bridges to determine which bridges are scour critical and would require further monitoring or application of scour countermeasures. The rapid estimation method (Level 1+) was designed to be a conservative estimator of scour depths compared to depths computed by a detailed hydraulic assessment (Level 2). Detailed hydraulic assessments were performed at 316 bridges that also had received a rapid estimation assessment, providing a broad data base to compare the two scour assessment methods. The scour depths computed by each of the two methods were compared for bridges that had similar discharges. For Missouri, the rapid estimation method (Level 1+) did not provide a reasonable conservative estimate of the detailed hydraulic assessment (Level 2) scour depths for contraction scour, but the discrepancy was the result of using different values for variables that were common to both of the assessment methods. The rapid estimation method (Level 1+) was a reasonable conservative estimator of the detailed hydraulic assessment (Level 2) scour depths for pier scour if the pier width is used for piers without footing exposure and the footing width is used for piers with footing exposure. Detailed hydraulic assessment (Level 2) scour depths were conservatively estimated by
Harwell, Glenn R.
2012-01-01
Organizations responsible for the management of water resources, such as the U.S. Army Corps of Engineers (USACE), are tasked with estimation of evaporation for water-budgeting and planning purposes. The USACE has historically used Class A pan evaporation data (pan data) to estimate evaporation from reservoirs but many USACE Districts have been experimenting with other techniques for an alternative to collecting pan data. The energy-budget method generally is considered the preferred method for accurate estimation of open-water evaporation from lakes and reservoirs. Complex equations to estimate evaporation, such as the Penman, DeBruin-Keijman, and Priestley-Taylor, perform well when compared with energy-budget method estimates when all of the important energy terms are included in the equations and ideal data are collected. However, sometimes nonideal data are collected and energy terms, such as the change in the amount of stored energy and advected energy, are not included in the equations. When this is done, the corresponding errors in evaporation estimates are not quantifiable. Much simpler methods, such as the Hamon method and a method developed by the U.S. Weather Bureau (USWB) (renamed the National Weather Service in 1970), have been shown to provide reasonable estimates of evaporation when compared to energy-budget method estimates. Data requirements for the Hamon and USWB methods are minimal and sometimes perform well with remotely collected data. The Hamon method requires average daily air temperature, and the USWB method requires daily averages of air temperature, relative humidity, wind speed, and solar radiation. Estimates of annual lake evaporation from pan data are frequently within 20 percent of energy-budget method estimates. Results of evaporation estimates from the Hamon method and the USWB method were compared against historical pan data at five selected reservoirs in Texas (Benbrook Lake, Canyon Lake, Granger Lake, Hords Creek Lake, and Sam
Westgate, Philip M
2014-05-01
Generalized estimating equations (GEE) are commonly used for the marginal analysis of correlated data, although the quadratic inference function (QIF) approach is an alternative that is increasing in popularity. This method optimally combines distinct sets of unbiased estimating equations that are based upon a working correlation structure, therefore asymptotically increasing or maintaining estimation efficiency relative to GEE. However, in finite samples, additional estimation variability arises when combining these sets of estimating equations, and therefore the QIF approach is not guaranteed to work as well as GEE. Furthermore, estimation efficiency can be improved for both analysis methods by accurate modeling of the correlation structure. Our goal is to improve parameter estimation, relative to existing methods, by simultaneously selecting a working correlation structure and choosing between GEE and two versions of the QIF approach. To do this, we propose the use of a criterion based upon the trace of the empirical covariance matrix (TECM). To make GEE and both QIF versions directly comparable for any given working correlation structure, the proposed TECM utilizes a penalty to account for the finite-sample variance inflation that can occur with either version of the QIF approach. Via a simulation study and in application to a longitudinal study, we show that penalizing the variance inflation that occurs with the QIF approach is necessary and that the proposed criterion works very well. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
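The selection idea described above can be illustrated with a minimal sketch: each candidate (analysis method plus working correlation structure) is scored by the trace of an empirical covariance matrix of the parameter estimates, with a stand-in penalty added for QIF candidates. The covariance matrices and the penalty value below are hypothetical; the paper derives a specific finite-sample penalty not reproduced here.

```python
import numpy as np

def tecm(cov_beta, penalty=0.0):
    """Trace of the empirical covariance matrix (TECM) of the regression
    parameter estimates, plus an optional penalty accounting for the
    finite-sample variance inflation of the QIF approach."""
    return np.trace(cov_beta) + penalty

# Hypothetical covariance estimates of the same three parameters under
# three candidate method/structure combinations (illustrative values).
candidates = {
    "GEE, exchangeable": np.diag([0.20, 0.06, 0.04]),
    "GEE, AR(1)":        np.diag([0.18, 0.05, 0.03]),
    "QIF, exchangeable": np.diag([0.17, 0.05, 0.03]),
}
# Stand-in penalty applied only to QIF candidates.
scores = {name: tecm(C, penalty=0.02 if name.startswith("QIF") else 0.0)
          for name, C in candidates.items()}
best = min(scores, key=scores.get)
```

Here the smallest penalized TECM selects both the method and the working structure in one step, which is the joint selection the paper proposes.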
Charo-Karisa, H.; Komen, J.; Rezk, M.A.; Ponzoni, R.W.; Arendonk, van J.A.M.; Bovenhuis, H.
2006-01-01
This study presents results of two generations of selection (G1 and G2) for growth of Nile tilapia. The selection environment consisted of earthen ponds which were fertilized daily with 50 kg dry matter (dm)/ha chicken manure. No supplementary feeds were provided. In total, 6429 fully pedigreed expe
Capesius, Joseph P.; Arnold, L. Rick
2012-01-01
The U.S. Geological Survey, in cooperation with the Colorado Water Conservation Board, compared two methods for estimating base flow in three reaches of the South Platte River between Denver and Kersey, Colorado. The two methods compared in this study are the Mass Balance and the Pilot Point methods. Base-flow estimates made with the two methods were based upon a 54-year period of record (1950 to 2003).
Christensen, R.C.; Johnson, E.B.; Plantz, G.G.
1986-01-01
Methods are presented for estimating 10 streamflow characteristics at three types of sites on natural flow streams in the Colorado River Basin in Utah. The streamflow characteristics include average discharge and annual maximum 1-, 7-, and 15-day mean discharges for recurrence intervals of 10, 50, and 100 years. At or near gaged sites, two methods weight gaging station data with regression equation values to estimate streamflow characteristics. At sites on ungaged streams, a method estimates streamflow characteristics using regression equations. The regression equations relate the streamflow characteristics to the following basin and climatic characteristics: contributing drainage area, mean basin elevation, mean annual precipitation, main channel slope, and forested area. Separate regression equations were developed for four hydrologically distinct regions in the study area. The standard error of estimate for the 10 streamflow characteristics ranges from 13% to 87%. Basin, climatic, and streamflow characteristics, available as of September 30, 1981, are presented for 135 gaging stations in Utah, Arizona, Colorado, and Wyoming. In addition, weighted estimates of the streamflow characteristics based on station data and the regression equation estimates are provided for most gaging stations.
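Weighting a gaging-station value with a regression-equation value, as described above, is commonly done as an inverse-variance combination; the sketch below shows that generic form. The exact weights used in the report (often based on years of record) may differ, and the numbers in the test are hypothetical.

```python
def weighted_estimate(q_station, var_station, q_regression, var_regression):
    """Combine an at-site (gaged) estimate with a regression-equation
    estimate, weighting each by the inverse of its variance; the more
    reliable value receives the larger weight."""
    w_s = 1.0 / var_station
    w_r = 1.0 / var_regression
    return (w_s * q_station + w_r * q_regression) / (w_s + w_r)
```

The weighted value always falls between the two inputs and approaches the station value as the station record lengthens (its variance shrinks).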
Constantin von zur Mühlen
2008-03-01
Recent progress in molecular magnetic resonance imaging (MRI) provides the opportunity to image cells and cellular receptors using microparticles of iron oxide (MPIOs). However, imaging targets on vessel walls remains challenging owing to the quantity of contrast agents delivered to areas of interest under shear stress conditions. We evaluated ex vivo binding characteristics of a functional MRI contrast agent to ligand-induced binding sites (LIBSs) on activated glycoprotein IIb/IIIa receptors of human platelets, which were lining rupture-prone atherosclerotic plaques and could therefore facilitate detection of platelet-mediated pathology in atherothrombotic disease. MPIOs were conjugated to anti-LIBS single-chain antibodies (LIBS-MPIO) or control antibodies (control MPIO). Ex vivo binding to human platelet-rich clots in a dose-dependent manner was confirmed on a 3 T clinical MRI scanner and by histology (p < .05 for LIBS-MPIO vs control MPIO). By using a flow chamber setup, significant binding of LIBS-MPIO to a platelet matrix was observed under venous and arterial flow conditions, but not for control MPIO (p < .001). A newly generated MRI contrast agent detects activated human platelets at clinically relevant magnetic field strengths and binds to platelets under venous and arterial flow conditions, conveying high payloads of contrast to specific molecular targets. This may provide the opportunity to identify vulnerable, rupture-prone atherosclerotic plaques via noninvasive MRI.
Carlos Oliva; José Benito; Ronald Acuña; Ana Bocanegra; Jhordan Baltazar
2014-01-01
Cocoa farming is one of the most important activities in the Peruvian Amazon. Its genetic basis rests on the introduction of improved clones, so the local genetic potential held in the germplasm bank is underused owing to limited breeding studies. This study aimed to estimate the repeatability for the genetic selection of high-yield trees with aromatic cocoa beans. Genetic selection analysis was performed with the SELEGEN REML/BLUP software on 3 years of data for assessing grain yield k...
Nagaraja, Mavinakoppa S.; Bhardwaj, Ajay Kumar; Prabhakara Reddy, G. V.; Srinivasamurthy, Chilakunda A.; Kumar, Sandeep
2016-06-01
Soil fertility and organic carbon (C) stock estimations are crucial to soil management, especially that of degraded soils, for productive agricultural use and in soil C sequestration studies. Currently, estimations based on generalized soil mass (hectare furrow basis) or bulk density are used which may be suitable for normal agricultural soils, but not for degraded soils. In this study, soil organic C, available nitrogen (N), available phosphorus (P2O5) and available potassium (K2O), and their stocks were estimated using three methods: (i) generalized soil mass (GSM, 2 million kg ha-1 furrow soil), (ii) bulk-density-based soil mass (BDSM) and (iii) the proportion of fine earth volume (FEV) method, for soils sampled from physically degraded lands in the eastern dry zone of Karnataka State in India. Comparative analyses using these methods revealed that the soil organic C, N, P2O and K2O stocks determined by using BDSM were higher than those determined by the GSM method. The soil organic C values were the lowest in the FEV method. The GSM method overestimated soil organic C, N, P2O and K2O by 9.3-72.1, 9.5-72.3, 7.1-66.6 and 9.2-72.3 %, respectively, compared to FEV-based estimations for physically degraded soils. The differences among the three methods of estimation were lower in soils with low gravel content and increased with an increase in gravel volume. There was overestimation of soil organic C and soil fertility with GSM and BDSM methods. A reassessment of methods of estimation was, therefore, attempted to provide fair estimates for land development projects in degraded lands.
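The three estimation bases compared in the study differ only in the soil mass (or volume) assumed per hectare; a minimal sketch, with concentrations in percent, bulk density in g/cm3, depth in meters, and hypothetical input values not taken from the study:

```python
def stock_gsm(conc_pct):
    """Generalized soil mass: assume 2 million kg of furrow soil per ha."""
    return conc_pct / 100.0 * 2.0e6                      # kg/ha

def stock_bdsm(conc_pct, bulk_density, depth_m):
    """Bulk-density-based soil mass for a 1-ha (10,000 m2) layer;
    bulk density in g/cm3 equals Mg/m3, hence the factor of 1000."""
    soil_mass = bulk_density * 1000.0 * depth_m * 10_000.0   # kg/ha
    return conc_pct / 100.0 * soil_mass

def stock_fev(conc_pct, bulk_density, depth_m, gravel_vol_frac):
    """Fine-earth-volume method: discount the volume occupied by gravel,
    which holds no organic C or nutrients."""
    return stock_bdsm(conc_pct, bulk_density, depth_m) * (1.0 - gravel_vol_frac)

# Illustrative comparison for a gravelly, degraded soil:
# 0.5 % organic C, bulk density 1.4 g/cm3, 0.15 m depth, 30 % gravel.
gsm  = stock_gsm(0.5)                      # generalized soil mass basis
bdsm = stock_bdsm(0.5, 1.4, 0.15)          # bulk-density basis
fev  = stock_fev(0.5, 1.4, 0.15, 0.30)     # fine-earth-volume basis
```

With these inputs the FEV estimate is the lowest and the gap widens as the gravel fraction grows, which mirrors the study's finding that GSM and BDSM overestimate stocks in gravelly degraded soils.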
Pithan, Felix; Ackerman, Andrew; Angevine, Wayne M.; Hartung, Kerstin; Ickes, Luisa; Kelley, Maxwell; Medeiros, Brian; Sandu, Irina; Steeneveld, Gert-Jan; Sterk, H. A. M.; Svensson, Gunilla; Vaillancourt, Paul A.; Zadra, Ayrton
2016-09-01
Weather and climate models struggle to represent lower tropospheric temperature and moisture profiles and surface fluxes in Arctic winter, partly because they lack or misrepresent physical processes that are specific to high latitudes. Observations have revealed two preferred states of the Arctic winter boundary layer. In the cloudy state, cloud liquid water limits surface radiative cooling, and temperature inversions are weak and elevated. In the radiatively clear state, strong surface radiative cooling leads to the build-up of surface-based temperature inversions. Many large-scale models lack the cloudy state, and some substantially underestimate inversion strength in the clear state. Here, the transformation from a moist to a cold dry air mass is modeled using an idealized Lagrangian perspective. The trajectory includes both boundary layer states, and the single-column experiment is the first Lagrangian Arctic air formation experiment (Larcform 1) organized within GEWEX GASS (Global atmospheric system studies). The intercomparison reproduces the typical biases of large-scale models: some models lack the cloudy state of the boundary layer due to the representation of mixed-phase microphysics or to the interaction between micro- and macrophysics. In some models, high emissivities of ice clouds or the lack of an insulating snow layer prevent the build-up of surface-based inversions in the radiatively clear state. Models substantially disagree on the amount of cloud liquid water in the cloudy state and on turbulent heat fluxes under clear skies. Observations of air mass transformations including both boundary layer states would allow for a tighter constraint of model behavior.
Ries, Kernell G.; Eng, Ken
2010-01-01
The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, operated a network of 20 low-flow partial-record stations during 2008 in a region that extends from southwest of Baltimore to the northeastern corner of Maryland to obtain estimates of selected streamflow statistics at the station locations. The study area is expected to face a substantial influx of new residents and businesses as a result of military and civilian personnel transfers associated with the Federal Base Realignment and Closure Act of 2005. The estimated streamflow statistics, which include monthly 85-percent duration flows, the 10-year recurrence-interval minimum base flow, and the 7-day, 10-year low flow, are needed to provide a better understanding of the availability of water resources in the area to be affected by base-realignment activities. Streamflow measurements collected for this study at the low-flow partial-record stations and measurements collected previously for 8 of the 20 stations were related to concurrent daily flows at nearby index streamgages to estimate the streamflow statistics. Three methods were used to estimate the streamflow statistics and two methods were used to select the index streamgages. Of the three methods used to estimate the streamflow statistics, two of them--the Moments and MOVE1 methods--rely on correlating the streamflow measurements at the low-flow partial-record stations with concurrent streamflows at nearby, hydrologically similar index streamgages to determine the estimates. These methods, recommended for use by the U.S. Geological Survey, generally require about 10 streamflow measurements at the low-flow partial-record station. The third method transfers the streamflow statistics from the index streamgage to the partial-record station based on the average of the ratios of the measured streamflows at the partial-record station to the concurrent streamflows at the index streamgage. This method can be used with as few as
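The MOVE1 method mentioned above fits a "line of organic correlation" whose slope is the ratio of standard deviations, so the estimated record preserves the variance of the short-record station; a minimal sketch, typically applied to log-transformed flows:

```python
import math
from statistics import mean, stdev

def move1_fit(x, y):
    """MOVE1 (Maintenance Of Variance Extension, type 1): fit y = a + b*x
    with slope b = sign(r) * s_y / s_x, unlike OLS, which would shrink
    the slope by the correlation coefficient."""
    mx, my = mean(x), mean(y)
    sx, sy = stdev(x), stdev(y)
    # Pearson correlation, computed directly for portability.
    r = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / ((len(x) - 1) * sx * sy)
    b = math.copysign(sy / sx, r)
    return my - b * mx, b
```

Here x would hold concurrent index-streamgage flows and y the partial-record-station measurements; the fitted line then transfers the index gage's flow statistics to the partial-record station.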
Evans David
2012-10-01
Background Directed acyclic graphs (DAGs) are an effective means of presenting expert-knowledge assumptions when selecting adjustment variables in epidemiology, whereas the change-in-estimate procedure is a common statistics-based approach. As DAGs imply specific empirical relationships which can be explored by the change-in-estimate procedure, it should be possible to combine the two approaches. This paper proposes such an approach which aims to produce well-adjusted estimates for a given research question, based on plausible DAGs consistent with the data at hand, combining prior knowledge and standard regression methods. Methods Based on the relationships laid out in a DAG, researchers can predict how a collapsible estimator (e.g. risk ratio or risk difference) for an effect of interest should change when adjusted on different variable sets. Implied and observed patterns can then be compared to detect inconsistencies and so guide adjustment-variable selection. Results The proposed approach involves i. drawing up a set of plausible background-knowledge DAGs; ii. starting with one of these DAGs as a working DAG, identifying a minimal variable set, S, sufficient to control for bias on the effect of interest; iii. estimating a collapsible estimator adjusted on S, then adjusted on S plus each variable not in S in turn (“add-one pattern”) and then adjusted on the variables in S minus each of these variables in turn (“minus-one pattern”); iv. checking the observed add-one and minus-one patterns against the pattern implied by the working DAG and the other prior DAGs; v. reviewing the DAGs, if needed; and vi. presenting the initial and all final DAGs with estimates. Conclusion This approach to adjustment-variable selection combines background-knowledge and statistics-based approaches using methods already common in epidemiology and communicates assumptions and uncertainties in a standardized graphical format. It is probably best suited to
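A minimal numerical sketch of the add-one pattern in step iii, using an OLS coefficient as the collapsible effect estimator (the paper's setting would typically use a risk ratio or risk difference; the variable names below are hypothetical):

```python
import numpy as np

def adjusted_effect(exposure, outcome, covariates):
    """Effect of `exposure` on `outcome` adjusted for a list of covariate
    arrays, taken here as the OLS coefficient of the exposure term."""
    X = np.column_stack([np.ones(len(outcome)), exposure] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[1]

def add_one_pattern(exposure, outcome, s_set, extras):
    """Re-estimate the adjusted effect after adding each variable not in
    the minimal set S, one at a time; shifts larger than implied by the
    working DAG flag an inconsistency to investigate."""
    return {name: adjusted_effect(exposure, outcome, s_set + [v])
            for name, v in extras.items()}
```

In the paper's procedure the observed pattern of changes (here, the dictionary of re-estimated coefficients against the S-adjusted baseline) is compared with the pattern the working DAG implies.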
丛秀娟
2011-01-01
Through analysis and definition of the conflict in the selection of safety factors, and selection of the appropriate conflict-resolution principles from the Theory of Inventive Problem Solving (TRIZ), a proposal is put forward to resolve this conflict. Combining the interference region of the load-strength interference model with the selection of the safety factor yields the relationship between the two. When the standard deviations of stress and strength are fixed, the higher the mean safety factor or the mean difference, the smaller the interference region; to obtain the same reliability, a smaller safety factor can then be selected, and conversely a larger one. When the means of stress and strength are fixed, the larger the standard deviations, the greater the dispersion of the distributions and the larger the interference region; to obtain the same reliability, a larger safety factor should then be selected, and conversely a smaller one. Finally, the feasibility of the method is verified by comparing the safety factors selected in the old and new crane design codes.
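The qualitative conclusions above follow directly from the normal stress-strength interference model; a minimal sketch with illustrative values not taken from the paper:

```python
import math

def reliability(mu_s, sd_s, mu_l, sd_l):
    """Reliability under the stress-strength interference model with
    independent normal strength S and load (stress) L:
        R = P(S > L) = Phi((mu_s - mu_l) / sqrt(sd_s**2 + sd_l**2)),
    where Phi is the standard normal CDF (via math.erf)."""
    z = (mu_s - mu_l) / math.hypot(sd_s, sd_l)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mean_safety_factor(mu_s, mu_l):
    """Mean safety factor: ratio of mean strength to mean load."""
    return mu_s / mu_l
```

With the means (and hence the mean safety factor) held fixed, increasing both standard deviations enlarges the interference region and lowers the reliability, so a larger safety factor is needed to recover the same reliability, matching the paper's second conclusion.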
Rothstein, Jesse
2009-01-01
Non-random assignment of students to teachers can bias value-added estimates of teachers' causal effects. Rothstein (2008a, b) shows that typical value-added models indicate large counterfactual effects of 5th grade teachers on students' 4th grade learning, implying that classroom assignments are far from random. This paper quantifies the…
Selecting best-fit models for estimating the body mass from 3D data of the human calcaneus.
Jung, Go-Un; Lee, U-Young; Kim, Dong-Ho; Kwak, Dai-Soon; Ahn, Yong-Woo; Han, Seung-Ho; Kim, Yi-Suk
2016-05-01
Body mass (BM) estimation can facilitate the interpretation of skeletal materials in terms of an individual's body size and physique in forensic anthropology. However, few metric studies have tried to estimate BM by focusing on the prominent biomechanical properties of the calcaneus. The purpose of this study was to prepare best-fit models for estimating BM from the 3D human calcaneus by two major linear regression approaches (the heuristic statistical and all-possible-regressions techniques) and to validate the models through predicted residual sum of squares (PRESS) statistics. A metric analysis was conducted on 70 human calcaneus samples (29 males and 41 females) taken from 3D models in the Digital Korean Database, and 10 variables were measured for each sample. Three best-fit models were selected by F-statistics, Mallows' Cp, and the Akaike information criterion (AIC) and Bayes information criterion (BIC) from the available candidate models. The most accurate regression model yields the lowest %SEE and an R2 of 0.843. Leave-one-out cross-validation indicated a high level of predictive accuracy. This study also confirms that equations for estimating BM using 3D models of the human calcaneus can help establish identification in forensic cases with consistent reliability. Copyright © 2016. Published by Elsevier Ireland Ltd.
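The selection statistics named above can be sketched for ordinary least squares with NumPy; the PRESS computation via hat-matrix leverages is algebraically equivalent to leave-one-out cross-validation, which is why the two validation routes in the study agree. This is a generic sketch, not the study's code.

```python
import numpy as np

def aic(X, y):
    """Gaussian AIC for an OLS model (up to an additive constant):
    n*log(RSS/n) + 2*(p + 1), where p counts the regression columns."""
    n, p = X.shape
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    rss = float(resid @ resid)
    return n * np.log(rss / n) + 2 * (p + 1)

def press(X, y):
    """Predicted residual sum of squares:
    PRESS = sum_i (e_i / (1 - h_ii))**2, with h_ii the leverages of the
    hat matrix H = X (X'X)^-1 X'; equals the leave-one-out SSE."""
    H = X @ np.linalg.pinv(X)
    e = y - H @ y
    return float(np.sum((e / (1.0 - np.diag(H))) ** 2))
```

Candidate models are then ranked by AIC (or BIC, Cp) and checked with PRESS, the lower the better in each case.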
张颖军; 朱锡; 梅志远; 李华东
2012-01-01
Considering the effect of ageing factors on composites in engineering practice, an improved equation for estimating the residual strength of aged polymer matrix composites is established by analyzing the ageing equation proposed by G.M. Gunyaev. The improved ageing equation is fitted to ageing experimental data by nonlinear least-squares regression and used to estimate residual strength. The results indicate that the theoretical calculations agree well with the experimental data. Because the improved equation treats the effects of different ageing factors as equivalent, it can be applied across different ageing environments, reducing the number of ageing tests required for the same material. The work provides a reference for equivalence studies of polymer matrix composites between natural and accelerated ageing.
Sverrisdóttir, Oddny Ósk; Timpson, Adrian; Toombs, Jamie; Lecoeur, Cecile; Froguel, Philippe; Carretero, Jose Miguel; Arsuaga Ferreras, Juan Luis; Götherström, Anders; Thomas, Mark G
2014-04-01
Lactase persistence (LP) is a genetically determined trait whereby the enzyme lactase is expressed throughout adult life. Lactase is necessary for the digestion of lactose--the main carbohydrate in milk--and its production is downregulated after the weaning period in most humans and all other mammals studied. Several sources of evidence indicate that LP has evolved independently, in different parts of the world over the last 10,000 years, and has been subject to strong natural selection in dairying populations. In Europeans, LP is strongly associated with, and probably caused by, a single C to T mutation 13,910 bp upstream of the lactase (LCT) gene (-13,910*T). Despite a considerable body of research, the reasons why LP should provide such a strong selective advantage remain poorly understood. In this study, we examine one of the most widely cited hypotheses for selection on LP--that fresh milk consumption supplemented the poor vitamin D and calcium status of northern Europe's early farmers (the calcium assimilation hypothesis). We do this by testing for natural selection on -13,910*T using ancient DNA data from the skeletal remains of eight late Neolithic Iberian individuals, whom we would not expect to have poor vitamin D and calcium status because of relatively high incident UVB light levels. None of the eight samples successfully typed in the study had the derived T-allele. In addition, we reanalyze published data from French Neolithic remains to both test for population continuity and further examine the evolution of LP in the region. Using simulations that accommodate genetic drift, natural selection, uncertainty in calibrated radiocarbon dates, and sampling error, we find that natural selection is still required to explain the observed increase in allele frequency. We conclude that the calcium assimilation hypothesis is insufficient to explain the spread of LP in Europe.
Hovgård, Holger
1996-01-01
powers of different netting sections. Gilling was about three times as efficient as maxillae catching and fishing power could be related to the ratio of twine diameter to mesh size. It is proposed that information on how fish are caught should be included when modelling gill-net selectivity, as lack...
Unbiased risk estimation method for covariance estimation
Lescornel, Hélène; Chabriac, Claudie
2011-01-01
We consider a model selection estimator of the covariance of a random process. Using the Unbiased Risk Estimation (URE) method, we build an estimator of the risk which allows one to select an estimator from a collection of models. Then, we present an oracle inequality which ensures that the risk of the selected estimator is close to the risk of the oracle. Simulations show the efficiency of this methodology.
Vining, Kevin C.; Vecchia, Aldo V.
2014-01-01
The U.S. Geological Survey, in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, used the stochastic monthly water-balance model and existing climate data to estimate monthly streamflows for 1951–2010 for selected streamgaging stations located within the Aynak copper, cobalt, and chromium area of interest in Afghanistan. The model used physically based, nondeterministic methods to estimate the monthly volumetric water-balance components of a watershed. A comparison of estimated and recorded monthly streamflows for the streamgaging stations Kabul River at Maidan and Kabul River at Tangi-Saidan indicated that the stochastic water-balance model was able to provide satisfactory estimates of monthly streamflows for high-flow months and low-flow months even though withdrawals for irrigation likely occurred. A comparison of estimated and recorded monthly streamflows for the streamgaging stations Logar River at Shekhabad and Logar River at Sangi-Naweshta also indicated that the stochastic water-balance model was able to provide reasonable estimates of monthly streamflows for the high-flow months; however, for the upstream streamgaging station, the model overestimated monthly streamflows during periods when summer irrigation withdrawals likely occurred. Results from the stochastic water-balance model indicate that the model should be able to produce satisfactory estimates of monthly streamflows for locations along the Kabul and Logar Rivers. This information could be used by Afghanistan authorities to make decisions about surface-water resources for the Aynak copper, cobalt, and chromium area of interest.
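For orientation, a deterministic single-bucket version of a monthly water balance is sketched below; the study's model is stochastic and considerably more detailed, so this shows only the general shape of the computation. Units are mm/month and all parameter values are hypothetical.

```python
def monthly_water_balance(precip, pet, capacity=100.0, storage=50.0):
    """Minimal bucket model: each month, precipitation refills soil
    storage, actual evapotranspiration is limited by available water,
    and water above the storage capacity spills as runoff."""
    runoff = []
    for p, e in zip(precip, pet):
        available = storage + p
        aet = min(e, available)           # ET cannot exceed available water
        storage = available - aet
        q = max(0.0, storage - capacity)  # spill above soil-moisture capacity
        storage -= q
        runoff.append(q)
    return runoff
```

A stochastic version, as in the study, would treat inputs and parameters as random and report distributions of monthly streamflow rather than a single trace.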
Eash, David A.; Barnes, Kimberlee K.; O'Shea, Padraic S.
2016-09-19
A statewide study was conducted to develop regression equations for estimating three selected spring and three selected fall low-flow frequency statistics for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include spring (April through June) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and fall (October through December) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years. Estimates of the three selected spring statistics are provided for 241 U.S. Geological Survey continuous-record streamgages, and estimates of the three selected fall statistics are provided for 238 of these streamgages, using data through June 2014. Because only 9 years of fall streamflow record were available, three streamgages included in the development of the spring regression equations were not included in the development of the fall regression equations. Because of regulation, diversion, or urbanization, 30 of the 241 streamgages were not included in the development of the regression equations. The study area includes Iowa and adjacent areas within 50 miles of the Iowa border. Because trend analyses indicated statistically significant positive trends when considering the period of record for most of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. Geographic information system software was used to measure 63 selected basin characteristics for each of the 211 streamgages used to develop the regional regression equations. The study area was divided into three low-flow regions that were defined in a previous study for the development of regional regression equations. Because several streamgages included in the development of regional regression equations have estimates of zero flow calculated from observed streamflow for selected spring and fall low-flow frequency statistics, the final equations for the
Bruno Martin; Giampiero Lombardi; Philippe Pradel,; Anne Farruggia; Mauro Coppa
2011-01-01
Research has recently focused on pasture species intake by ruminants due to their influence on animal product quality. A field-applicable method which investigates species intake and selection, was tested on two dairy cow grazing systems: continuous grazing on a highly-biodiverse pasture (C) and rotational grazing on a moderately-diverse sward (R). In addition to the grazed class method, which evaluates the percentage of grazed dry matter (DM) per species according to the residual height of t...
Bruno Martin
2011-01-01
Research has recently focused on pasture species intake by ruminants due to their influence on animal product quality. A field-applicable method which investigates species intake and selection was tested on two dairy cow grazing systems: continuous grazing on a highly-biodiverse pasture (C) and rotational grazing on a moderately-diverse sward (R). In addition to the grazed class method, which evaluates the percentage of grazed dry matter (DM) per species according to the residual height of the plant grazed, further measurements were introduced to quantify DM consumption and the selection index per species. Six and four representative species were studied in the C and R systems, respectively. We found an exponential regression between the presence of a species and its contribution to the cattle's daily intake (P<0.01). On the C plot, Festuca nigrescens showed the highest intake (6.2 kg DM/cow d), even if avoided. On the R plot, Taraxacum officinale was intensively consumed (6.1 kg DM/cow d), even though the cows did not express positive selection for the species, while Poaceae were avoided. By giving details on species consumption, the improved grazed class method may prove especially useful in non-experimental conditions in biodiverse swards for tailoring grazing management to the consumption of species able to give specific characteristics to dairy products.
Olson, Scott A.; with a section by Veilleux, Andrea G.
2014-01-01
This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
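The drainage-area adjustment technique mentioned at the end of the abstract is commonly a power-law transfer from the streamgage to the ungaged site; a sketch with an illustrative exponent (reports of this kind derive the adjustment regionally, so the form and exponent here are assumptions):

```python
def area_adjusted_discharge(q_gage, area_gage, area_ungaged, exponent=0.8):
    """Transfer a flood discharge at a selected AEP from a streamgage to
    an ungaged site upstream or downstream on the same stream:
        Q_u = Q_g * (A_u / A_g) ** b
    The exponent b (0.8 here) is illustrative only."""
    return q_gage * (area_ungaged / area_gage) ** exponent
```

Such adjustments are typically trusted only when the ungaged site's drainage area is within some fraction of the gaged area, since the power-law assumption degrades far from the gage.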
Goad, Clyde C.; Chadwell, C. David
1993-01-01
GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double-difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double-differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor and the
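The first-order Gauss-Markov states mentioned above have a simple exact discretization, sketched below; the time constant and noise level are illustrative, not GEODYNII's values.

```python
import math
import random

def gauss_markov_step(x, tau, sigma, dt, rng=random):
    """One discrete step of a stationary first-order Gauss-Markov process
    (used for states such as the solar-pressure scale coefficient):
        x_{k+1} = phi * x_k + w_k,   phi = exp(-dt / tau),
        w_k ~ N(0, sigma**2 * (1 - phi**2)),
    so the process has steady-state variance sigma**2.  A random-walk
    state (as used for the tropospheric correction) instead keeps
    phi = 1 with a fixed process-noise variance per step."""
    phi = math.exp(-dt / tau)
    w = rng.gauss(0.0, sigma * math.sqrt(1.0 - phi * phi))
    return phi * x + w
```

In a sequential filter, this update propagates the stochastic state between measurement epochs while the deterministic orbit model propagates the rest of the state vector.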
Vinayaraj, P.; Johnson, G.; Dora, G.U.; Philip, C.S.; SanilKumar, V.; Gowthaman, R.
station are presented by overlaying together. Quantification of the erosion/accretion rate is done by digitizing polygon features using ArcGIS. Coastal processes are not uniform with respect to time, and it is difficult to compare two scenes taken... at different times because of the non-uniform tides of the coastal area; i.e., if it is low tide in one scene, it may be high tide in the other. So there is a chance of error in the estimation of erosion/accretion [10]. For minimizing the error, the data during...
Ali A. Rostami
2016-08-01
Concerns have been raised in the literature about the potential for secondhand exposure from e-vapor product (EVP) use. It would be difficult to experimentally determine the impact of various factors on secondhand exposure, including, but not limited to, room characteristics (indoor space size, ventilation rate), device specifications (aerosol mass delivery, e-liquid composition), and use behavior (number of users and usage frequency). Therefore, a well-mixed computational model was developed to estimate the indoor levels of constituents from EVPs under a variety of conditions. The model is based on physical and thermodynamic interactions between aerosol, vapor, and air, similar to indoor air models referred to by the Environmental Protection Agency. The model results agree well with measured indoor air levels of nicotine from two sources: smoking machine-generated aerosol and aerosol exhaled from EVP use. Sensitivity analysis indicated that increasing the air exchange rate reduces the room air level of constituents, as more material is carried away. The effect of the amount of aerosol released into the space due to variability in exhalation was also evaluated. The model can estimate the room air level of constituents as a function of time, which may be used to assess the level of non-user exposure over time.
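For a single constituent and a constant emission rate, a well-mixed indoor air model of this kind reduces to a one-compartment mass balance with an analytic solution. A hedged sketch, with all parameter values invented for illustration (not the paper's):

```python
import math

def room_concentration(emission, volume, ach, t_hours):
    """Constituent concentration in a well-mixed room, starting from clean air.

    emission: constant release rate into the room (mg/h)
    volume:   room volume (m^3)
    ach:      air exchange rate (air changes per hour)
    Solves dC/dt = emission/volume - ach*C analytically.
    """
    c_ss = emission / (ach * volume)          # steady-state level
    return c_ss * (1.0 - math.exp(-ach * t_hours))

# Illustrative numbers only: 1 mg/h released into a 30 m^3 room at 0.5 ACH
print(round(room_concentration(1.0, 30.0, 0.5, 8.0), 4))  # ≈ 0.0654 mg/m^3
```

The closed form also shows the sensitivity result reported above: the steady-state level is inversely proportional to the air exchange rate.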
Janoos, Firdaus; Pursley, Jennifer; Fedorov, Andriy; Tempany, Clare; Cormack, Robert A.; Wells, William M.
2013-01-01
Transrectal ultrasound (TRUS) facilitates intra-treatment delineation of the prostate gland (PG) to guide insertion of brachytherapy seeds, but the prostate substructure and apex are not always visible which may make the seed placement sub-optimal. Based on an elastic model of the prostate created from MRI, where the prostate substructure and apex are clearly visible, we use a Bayesian approach to estimate the posterior distribution on deformations that aligns the pre-treatment MRI with intra-treatment TRUS. Without apex information in TRUS, the posterior prediction of the location of the prostate boundary, and the prostate apex boundary in particular, is mainly determined by the pseudo stiffness hyper-parameter of the prior distribution. We estimate the optimal value of the stiffness through likelihood maximization that is sensitive to the accuracy as well as the precision of the posterior prediction at the apex boundary. From a data-set of 10 pre- and intra-treatment prostate images with ground truth delineation of the total PG, 4 cases were used to establish an optimal stiffness hyper-parameter when 15% of the prostate delineation was removed to simulate lack of apex information in TRUS, while the remaining 6 cases were used to cross-validate the registration accuracy and uncertainty over the PG and in the apex. PMID:23286120
Fernanda Sakamoto, Apostolos Doukas, William Farinelli, Zeina Tannous, Michelle D. Shinn, Stephen Benson, Gwyn P. Williams, H. Dylla, Richard Anderson
2011-12-01
The success of permanent laser hair removal suggests that selective photothermolysis (SP) of sebaceous glands, another part of hair follicles, may also have merit. About 30% of sebum consists of fats with copious CH2 bond content. SP was studied in vitro, using free electron laser (FEL) pulses at an infrared CH2 vibrational absorption wavelength band. Absorption spectra of natural and artificially prepared sebum were measured from 200 nm to 3000 nm to determine wavelengths potentially able to target sebaceous glands. The Jefferson National Accelerator superconducting FEL was used to measure photothermal excitation of aqueous gels, artificial sebum, pig skin, and human scalp and forehead skin (sebaceous sites). In vitro skin samples were exposed to FEL pulses from 1620 to 1720 nm, spot diameter 7-9.5 mm, with exposure through a cold 4 °C sapphire window in contact with the skin. Exposed and control tissue samples were stained using hematoxylin and eosin (H&E), and nitroblue tetrazolium chloride (NBTC) staining was used to detect thermal denaturation. Natural and artificial sebum both had absorption peaks near 1210, 1728, 1760, 2306, and 2346 nm. Laser-induced heating of artificial sebum was approximately twice that of water at 1710 and 1720 nm, and about 1.5x higher in human sebaceous glands than in water. Thermal camera imaging showed transient focal heating near sebaceous hair follicles. Histologically, skin samples exposed to approximately 1700 nm, approximately 100-125 ms pulses showed evidence of selective thermal damage to sebaceous glands. Sebaceous glands were positive for NBTC staining, without evidence of selective loss in samples exposed to the laser. Epidermis was undamaged in all samples. Conclusions: SP of sebaceous glands appears to be feasible. Potentially, optical pulses at approximately 1720 nm or approximately 1210 nm delivered with a large beam diameter and appropriate skin cooling in approximately 0.1 s may provide an alternative treatment for acne.
Muscle Strength and Poststroke Hemiplegia
Kristensen, Otto H; Stenager, Egon; Dalgas, Ulrik
2017-01-01
OBJECTIVES: To systematically review (1) psychometric properties of criterion isokinetic dynamometry testing of muscle strength in persons with poststroke hemiplegia (PPSH); and (2) literature that compares muscle strength in patients poststroke with that in healthy controls assessed by criterion...... isokinetic dynamometry. DATA SOURCES: A systematic literature search of 7 databases was performed. STUDY SELECTION: Included studies (1) enrolled participants with definite poststroke hemiplegia according to defined criteria; (2) assessed muscle strength or power by criterion isokinetic dynamometry; (3) had...... undergone peer review; and (4) were available in English or Danish. DATA EXTRACTION: The psychometric properties of isokinetic dynamometry were reviewed with respect to reliability, validity, and responsiveness. Furthermore, comparisons of strength between paretic, nonparetic, and comparable healthy muscles...
DRAGAN CVETKOVIC
2008-10-01
The effects of ultraviolet radiation (UV) on the antioxidant action of three selected carotenoids (β-carotene, lycopene and lutein) in the presence of a lipoidal lecithin mixture were studied by the DPPH (1,1-diphenyl-2-picrylhydrazyl) test. The test is based on the measurement of the decrease of the free DPPH radical absorbance at 517 nm caused by the antioxidant action of the carotenoids, which appeared to be strongly affected by UV exposure. The high-energy input of the involved UV photons plays a major governing role.
Eash, David A.; Barnes, Kimberlee K.; O'Shea, Padraic S.
2016-09-19
A statewide study was conducted to develop regression equations for estimating three selected spring and three selected fall low-flow frequency statistics for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include spring (April through June) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and fall (October through December) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years. Estimates of the three selected spring statistics are provided for 241 U.S. Geological Survey continuous-record streamgages, and estimates of the three selected fall statistics are provided for 238 of these streamgages, using data through June 2014. Because only 9 years of fall streamflow record were available, three streamgages included in the development of the spring regression equations were not included in the development of the fall regression equations. Because of regulation, diversion, or urbanization, 30 of the 241 streamgages were not included in the development of the regression equations. The study area includes Iowa and adjacent areas within 50 miles of the Iowa border. Because trend analyses indicated statistically significant positive trends when considering the period of record for most of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. Geographic information system software was used to measure 63 selected basin characteristics for each of the 211 streamgages used to develop the regional regression equations. The study area was divided into three low-flow regions that were defined in a previous study for the development of regional regression equations. Because several streamgages included in the development of regional regression equations have estimates of zero flow calculated from observed streamflow for selected spring and fall low-flow frequency statistics, the final equations for the
Stanton, Jennifer S.; Qi, Sharon L.; Ryter, Derek W.; Falk, Sarah E.; Houston, Natalie A.; Peterson, Steven M.; Westenbroek, Stephen M.; Christenson, Scott C.
2011-01-01
method. Precipitation estimates using these sources, as a 10-year average annual total volume for the High Plains, ranged from 192 to 199 million acre-feet (acre-ft) for 1940 through 1949 and from 185 to 199 million acre-ft for 2000 through 2009. Evapotranspiration was obtained from three sources: the National Weather Service Sacramento-Soil Moisture Accounting model, the Simplified-Surface-Energy-Balance model using remotely sensed data, and the Soil-Water-Balance model. Average annual total evapotranspiration estimated using these sources was 148 million acre-ft for 1940 through 1949 and ranged from 154 to 193 million acre-ft for 2000 through 2009. The maximum amount of shallow groundwater lost to evapotranspiration was approximated for areas where the water table was within 5 feet of land surface. The average annual total volume of evapotranspiration from shallow groundwater was 9.0 million acre-ft for 1940 through 1949 and ranged from 9.6 to 12.6 million acre-ft for 2000 through 2009. Recharge was estimated using two soil-water-balance models as well as previously published studies for various locations across the High Plains region. Average annual total recharge ranged from 8.3 to 13.2 million acre-ft for 1940 through 1949 and from 15.9 to 35.0 million acre-ft for 2000 through 2009. Surface runoff and groundwater discharge to streams were determined using discharge records from streamflow-gaging stations near the edges of the High Plains and the Base-Flow Index program. For 1940 through 1949, the average annual net surface runoff leaving the High Plains was 1.9 million acre-ft, and the net loss from the High Plains aquifer by groundwater discharge to streams was 3.1 million acre-ft. For 2000 through 2009, the average annual net surface runoff leaving the High Plains region was 1.3 million acre-ft and the net loss by groundwater discharge to streams was 3.9 million acre-ft. 
For 2000 through 2009, the average annual total estimated groundwater pumpage volume from two
Gotvald, Anthony J.
2017-01-13
The U.S. Geological Survey, in cooperation with the Georgia Department of Natural Resources, Environmental Protection Division, developed regional regression equations for estimating selected low-flow frequency and mean annual flow statistics for ungaged streams in north Georgia that are not substantially affected by regulation, diversions, or urbanization. Selected low-flow frequency statistics and basin characteristics for 56 streamgage locations within north Georgia and 75 miles beyond the State’s borders in Alabama, Tennessee, North Carolina, and South Carolina were combined to form the final dataset used in the regional regression analysis. Because some of the streamgages in the study recorded zero flow, the final regression equations were developed using weighted left-censored regression analysis to analyze the flow data in an unbiased manner, with weights based on the number of years of record. The set of equations includes the annual minimum 1- and 7-day average streamflow with the 10-year recurrence interval (referred to as 1Q10 and 7Q10), monthly 7Q10, and mean annual flow. The final regional regression equations are functions of drainage area, mean annual precipitation, and relief ratio for the selected low-flow frequency statistics, and drainage area and mean annual precipitation for mean annual flow. The average standard error of estimate was 13.7 percent for the mean annual flow regression equation and ranged from 26.1 to 91.6 percent for the selected low-flow frequency equations. The equations, which are based on data from streams with little to no flow alterations, can be used to provide estimates of the natural flows for selected ungaged stream locations in the area of Georgia north of the Fall Line. The regression equations are not to be used to estimate flows for streams that have been altered by the effects of major dams, surface-water withdrawals, groundwater withdrawals (pumping wells), diversions, or wastewater discharges. The regression
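Regression equations of this general shape are usually fit in log space. The sketch below fits a power-law model Q = a·DA^b·P^c by ordinary least squares on synthetic data; it ignores the weighting and left-censoring used in the actual study, and every number is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 60

# Synthetic basin characteristics (invented for illustration)
da = rng.uniform(10.0, 500.0, n)      # drainage area, mi^2
precip = rng.uniform(30.0, 70.0, n)   # mean annual precipitation, inches

# Assumed "true" model Q = 0.03 * DA^1.0 * P^0.5 with lognormal noise
q = 0.03 * da ** 1.0 * precip ** 0.5 * np.exp(rng.normal(0.0, 0.05, n))

# Fit log10(Q) = b0 + b1*log10(DA) + b2*log10(P) by least squares
X = np.column_stack([np.ones(n), np.log10(da), np.log10(precip)])
(b0, b1, b2), *_ = np.linalg.lstsq(X, np.log10(q), rcond=None)
print(round(10.0 ** b0, 3), round(b1, 2), round(b2, 2))  # near 0.03, 1.0, 0.5
```

Exponentiating the fitted intercept recovers the leading coefficient; the slopes are the exponents on the basin characteristics.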
Vieira, G. L. S.
2017-06-01
Considering a direct correlation between the level of detail of project requirements and project performance, this paper evaluates whether the adoption of more extensive and detailed cost, time, and scope estimation processes, based on both traditional and agile practices and executed concurrently with the supplier selection stage, can ensure greater accuracy in these estimates and thus increase project success rates. Based on a case study of an information system project implemented in the legal department of a large Brazilian company, five suppliers had their proposals analyzed and compared in terms of the costs and deadlines involved, as well as the project management processes used in their estimates. From the results obtained, it was possible to observe that not all companies follow, at least during the prospecting phase, the management processes described in their service proposals, according to the theory. Another important finding was that the proposals involving, at least partially, agile approach concepts were more likely to justify their estimates. These proposals also presented lower values when compared to those less adherent to the theoretical concepts, such as those based on traditional concepts.
Szczerbiński, Robert; Karczewski, Jan
2011-01-01
This work aimed to estimate the intake of foods containing permitted preservatives. The data comprised food samples from 14 poviats of Podlaskie voivodeship tested for the presence of preservatives (sodium nitrate, nitrite, benzoic acid and its salts, sorbic acid and its salts). The samples were collected between 2004 and 2007 by the food inspection agency. Data on food consumption were taken from the Polish Central Statistical Office's records of average consumption of selected foodstuffs in households in which consumption of the given foodstuff was recorded, whereas data on the consumption of soft drinks came from a March 2008 report on the soft drinks market in Poland. It was found that the average intake of the considered preservatives with an average diet does not pose a threat to consumers. Given that the data on household foodstuff consumption are limited, it is advisable to create databases of foodstuff consumption that would allow a more precise evaluation of intake.
Hald, Tine; Aspinall, Willy; Devleesschauwer, Brecht
2016-01-01
to the global burden of diseases commonly transmitted through the consumption of food.Methods and FindingsWe applied structured expert judgment using Cooke's Classical Model to obtain estimates for 14 subregions for the relative contributions of different transmission pathways for eleven diarrheal diseases......, seven other infectious diseases and one chemical (lead). Experts were identified through international networks followed by social network sampling. Final selection of experts was based on their experience including international working experience. Enrolled experts were scored on their ability to judge...
Garcia R, A. [ININ, Carretera Mexico-Toluca S/N, 52750 La Marquesa, Ocoyoacac, Estado de Mexico (Mexico)]. e-mail: ramador@nuclear.inin.mx
2007-07-01
At present, signals are used to diagnose the state of systems by extracting their most important characteristics, such as frequencies, trends, changes, and temporal evolutions. These characteristics are detected by means of diverse analysis techniques, such as autoregressive methods, the Fourier transform, the short-time Fourier transform, and the Wavelet transform, among others. The present work uses the Wavelet transform because it allows stationary, quasi-stationary, and transitory signals to be analyzed in the time-frequency plane. It also describes a methodology for selecting the scales and the Wavelet function to be applied in the Wavelet transform, with the objective of detecting the dominant system frequencies. (Author)
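One simple way to detect a dominant frequency with the Wavelet transform is to scan a set of candidate frequencies with a Morlet wavelet and keep the one with maximum energy. The sketch below is a generic illustration of that idea, not the author's methodology; the signal and frequency grid are invented:

```python
import numpy as np

def morlet(tau, w0=6.0):
    """Complex Morlet wavelet (L2 normalisation applied by the caller)."""
    return np.pi ** -0.25 * np.exp(1j * w0 * tau - tau ** 2 / 2.0)

def dominant_frequency(signal, fs, freqs, w0=6.0):
    """Return the candidate frequency with the largest wavelet energy."""
    n = len(signal)
    t = np.arange(n) / fs
    energies = []
    for f in freqs:
        scale = w0 / (2.0 * np.pi * f)          # scale <-> frequency for Morlet
        tau = (t - t[n // 2]) / scale
        psi = morlet(tau, w0) / np.sqrt(scale)  # L2-normalised wavelet
        coeffs = np.convolve(signal, np.conj(psi[::-1]), mode="same")
        energies.append(np.sum(np.abs(coeffs) ** 2))
    return float(freqs[int(np.argmax(energies))])

fs = 200.0                                      # sampling rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 12.5 * t) + 0.3 * np.sin(2 * np.pi * 3.0 * t)
f_dom = dominant_frequency(sig, fs, np.linspace(1.0, 40.0, 79))
print(f_dom)
```

The scan correctly singles out the stronger 12.5 Hz component despite the low-frequency interferer, which is the kind of time-frequency discrimination the abstract attributes to the Wavelet transform.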
The Impact of Learning Curve Model Selection and Criteria for Cost Estimation Accuracy in the DoD
2016-04-30
Thirteenth Annual Acquisition Research Symposium, Thursday Sessions, Volume II. The Impact of Learning Curve Model Selection and Criteria for Cost... Assistant Division Director, Institute for Defense Analyses; Bruce Harmon, Research Staff Member, Institute for Defense Analyses. ...Army Contracting Command. Acquisition Research Program: Creating Synergy for Informed Change.
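For context, the learning curve models usually compared in cost estimation are unit theory (Crawford) and cumulative-average theory (Wright), both built on T1·x^b with b = log(slope)/log(2). A small sketch of the arithmetic (generic formulas, not this paper's data):

```python
import math

def learning_exponent(slope):
    """b for a learning slope given as a fraction, e.g. 0.8 for an 80% curve."""
    return math.log(slope) / math.log(2)

def unit_cost(t1, x, slope):
    """Crawford unit theory: cost of unit number x."""
    return t1 * x ** learning_exponent(slope)

def wright_total(t1, x, slope):
    """Wright cumulative-average theory: total cost of the first x units,
    where t1 * x**b is interpreted as the *average* cost of those units."""
    return x * t1 * x ** learning_exponent(slope)

# On an 80% curve, every doubling of quantity cuts unit cost by 20%
print(round(unit_cost(100.0, 2, 0.8), 6), round(unit_cost(100.0, 4, 0.8), 6))  # 80.0 64.0
```

The two theories apply the same power law to different quantities (the individual unit versus the running average), which is why model selection matters for the resulting cost estimates.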
Jevrić Lidija R.
2013-01-01
The estimation of retention factors by correlation equations with physico-chemical properties can be of great help in chromatographic studies. The retention factors were experimentally measured by RP-HPTLC on silica gel impregnated with paraffin oil, using two-component solvent systems. The relationships between solute retention and modifier concentration were described by Snyder's linear equation. A quantitative structure-retention relationship was developed for a series of s-triazine compounds by multiple linear regression (MLR) analysis. The MLR procedure was used to model the relationships between the molecular descriptors and the retention of s-triazine derivatives. The physico-chemical molecular descriptors were calculated from the optimized structures: lipophilicity (log P), connectivity indices (χ), total energy (Et), water solubility (log W), dissociation constant (pKa), molar refractivity (MR), and Gibbs energy (GibbsE) of the s-triazines. A high agreement between the experimental and predicted retention parameters was obtained when the dissociation constant and the hydrophilic-lipophilic balance were used as the molecular descriptors. The empirical equations may be successfully used for the prediction of various chromatographic characteristics of substances with a similar chemical structure. [Projects of the Ministry of Science of the Republic of Serbia, nos. 31055, 172012, 172013 and 172014]
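An MLR model of this kind regresses the retention parameter on the descriptors and reports the quality of fit. A sketch with invented coefficients and synthetic descriptor values (not the paper's compounds or data):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30

# Invented descriptor values for a hypothetical compound series
logp = rng.uniform(1.0, 4.0, n)       # lipophilicity, log P
pka = rng.uniform(2.0, 10.0, n)       # dissociation constant

# Assumed "true" relationship (coefficients invented for illustration)
rm = 0.4 * logp - 0.05 * pka + rng.normal(0.0, 0.02, n)

X = np.column_stack([np.ones(n), logp, pka])
coef, *_ = np.linalg.lstsq(X, rm, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((rm - pred) ** 2) / np.sum((rm - rm.mean()) ** 2)
print(np.round(coef, 2), round(float(r2), 3))
```

The fitted coefficients recover the assumed descriptor weights, and the R² statistic quantifies the "high agreement between experimental and predicted retention parameters" described above.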
Settumba, Stella Nalukwago; Sweeney, Sedona; Seeley, Janet; Biraro, Samuel; Mutungi, Gerald; Munderi, Paula; Grosskurth, Heiner; Vassall, Anna
2015-06-01
To explore the chronic disease services in Uganda: their level of utilisation, the total service costs and unit costs per visit. Full financial and economic cost data were collected from 12 facilities in two districts, from the provider's perspective. A combination of ingredients-based and step-down allocation costing approaches was used. The diseases under study were diabetes, hypertension, chronic obstructive pulmonary disease (COPD), epilepsy and HIV infection. Data were collected through a review of facility records, direct observation and structured interviews with health workers. Provision of chronic care services was concentrated at higher-level facilities. Excluding drugs, the total costs for NCD care fell below 2% of total facility costs. Unit costs per visit varied widely, both across different levels of the health system, and between facilities of the same level. This variability was driven by differences in clinical and drug prescribing practices. Most patients reported directly to higher-level facilities, bypassing nearby peripheral facilities. NCD services in Uganda are underfunded particularly at peripheral facilities. There is a need to estimate the budget impact of improving NCD care and to standardise treatment guidelines. © 2015 The Authors. Tropical Medicine & International Health Published by John Wiley & Sons Ltd.
Estimation of Flammability Limits of Selected Fluorocarbons with F2 and ClF3
Trowbridge, L.D.
1999-09-01
During gaseous diffusion plant operations, conditions leading to the formation of flammable gas mixtures may occasionally arise. Currently, these could consist of the evaporative coolant CFC-114 and fluorinating agents such as F2 and ClF3. Replacement of CFC-114 with non-ozone-depleting substitutes such as c-C4F8 and C4F10 is planned. Consequently, in the future, these too must be considered potential "fuels" in flammable gas mixtures. Two questions of practical interest arise: (1) can a particular mixture sustain and propagate a flame if ignited, and (2) what is the maximum pressure that can be generated by the burning (and possibly exploding) gas mixture, should ignition occur? Experimental data on these systems are limited. To assist in answering these questions, a literature search for relevant data was conducted, and mathematical models were developed to serve as tools for predicting potential detonation pressures and estimating (based on empirical correlations between gas mixture thermodynamics and flammability for known systems) the composition limits of flammability for these systems. The models described and documented in this report are enhanced versions of similar models developed in 1992.
Statistical Analysis of Data for Timber Strengths
Sørensen, John Dalsgaard
2003-01-01
Statistical analyses are performed for material strength parameters from a large number of specimens of structural timber. Non-parametric statistical analysis and fits have been investigated for the following distribution types: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull. ... The statistical fits have generally been made using all data and the lower tail of the data. The Maximum Likelihood Method and the Least Square Technique have been used to estimate the statistical parameters in the selected distributions. The results show that the 2-parameter Weibull distribution gives the best ... fits to the data available, especially if tail fits are used, whereas the Lognormal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used. The implications on the reliability level of typical structural elements and on partial safety factors
Statistical Analysis of Data for Timber Strengths
Sørensen, John Dalsgaard; Hoffmeyer, P.
Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distribution types have been investigated: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull. ... The statistical fits have generally been made using all data (100%) and the lower tail (30%) of the data. The Maximum Likelihood Method and the Least Square Technique have been used to estimate the statistical parameters in the selected distributions. 8 different databases are analysed. The results show that 2-parameter Weibull (and Normal) distributions give the best fits to the data available, especially if tail fits are used, whereas the Lognormal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used.
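A 2-parameter Weibull fit by the Maximum Likelihood Method reduces to a one-dimensional root-finding problem for the shape parameter. A sketch on synthetic strength-like data (the study's real data and fitting code are not reproduced; bisection is used here for robustness):

```python
import numpy as np

def weibull_mle(x, lo=0.05, hi=50.0, iters=80):
    """Maximum-likelihood fit of a 2-parameter Weibull (shape k, scale lam).

    The profile likelihood equation g(k) = 0 is monotone increasing in k,
    so a simple bisection suffices.
    """
    x = np.asarray(x, dtype=float)
    lx = np.log(x)

    def g(k):
        xk = x ** k
        return np.sum(xk * lx) / np.sum(xk) - lx.mean() - 1.0 / k

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = np.mean(x ** k) ** (1.0 / k)
    return k, lam

# Synthetic "bending strength" sample: shape 5, scale 40 MPa (invented)
rng = np.random.default_rng(1)
sample = 40.0 * rng.weibull(5.0, 5000)
k_hat, lam_hat = weibull_mle(sample)
print(round(k_hat, 2), round(lam_hat, 2))  # near 5 and 40
```

Tail fits of the kind described above would apply the same machinery to only the lowest-order statistics of the sample, which is why they can change the estimated coefficients of variation.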
Kenny, Joan F.; Juracek, Kyle E.
2012-01-01
Domestic water-use and related socioeconomic and climatic data for 2005-10 were used in an analysis of 21 selected U.S. cities to describe recent domestic per capita water use, investigate variables that potentially affect domestic water use, and provide guidance for estimating domestic water use. Domestic water use may be affected by a combination of several factors. Domestic per capita water use for the selected cities ranged from a median annual average of 43 to 177 gallons per capita per day (gpcd). In terms of year-to-year variability in domestic per capita water use for the selected cities, the difference from the median ranged from ± 7 to ± 26 percent with an overall median variability of ± 14 percent. As a percentage of total annual water use, median annual domestic water use for the selected cities ranged from 33 to 71 percent with an overall median of 57 percent. Monthly production and water sales data were used to calculate daily per capita water use rates for the lowest 3 consecutive months (low-3) and the highest 3 consecutive months (high-3) of usage. Median low-3 domestic per capita water use for 16 selected cities ranged from 40 to 100 gpcd. Median high-3 domestic per capita water use for 16 selected cities ranged from 53 to 316 gpcd. In general, the median domestic water use as a percentage of the median total water use for 16 selected cities was similar for the low-3 and high-3 periods. Statistical analyses of combined data for the selected cities indicated that none of the socioeconomic variables, including cost of water, were potentially useful as determinants of domestic water use at the national level. However, specific socioeconomic variables may be useful for the estimation of domestic water use at the State or local level. Different socioeconomic variables may be useful in different States. Statistical analyses indicated that specific climatic variables may be useful for the estimation of domestic water use for some, but not all, of the
Carroll, Rebecca I; Forbes, Andrew; Graham, David A; Messam, Locksley L McV
2017-09-01
Abattoir surveys and findings from post-mortem meat inspection are commonly used to estimate infection or disease prevalence in farm animal populations. However, the function of an abattoir is to slaughter animals for human consumption, and the collection of information on animal health for research purposes is a secondary objective. This can result in methodological shortcomings leading to biased prevalence estimates. Selection bias can occur when the study population as obtained from the abattoir is not an accurate representation of the target population. Virtually all of the tests used in abattoir surveys to detect infections or diseases that impact animal health are imperfect, leading to errors in identifying the outcome of interest and consequently, information bias. Examination of abattoir surveys estimating prevalence in the literature reveals shortcomings in the methods used in these studies. While the STROBE-Vet statement provides clear guidance on the reporting of observational research, we have not found any guidelines in the literature advising researchers on how to conduct abattoir surveys. This paper presents a protocol in two flowcharts to help researchers (regardless of their background in epidemiology) to first identify, and, where possible, minimise biases in abattoir surveys estimating prevalence. Flowchart 1 examines the identification of the target population and the appropriate study population while Flowchart 2 guides the researcher in identifying, and, where possible, correcting potential sources of outcome misclassification. Examples of simple sensitivity analyses are also presented which approximate the likely uncertainty in prevalence estimates due to systematic errors. Finally, the researcher is directed to outline any limitations of the study in the discussion section of the paper. This protocol makes it easier to conduct an abattoir survey using sound methods, identifying and, where possible, minimizing biases. Copyright © 2017
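One standard example of the simple sensitivity analyses mentioned is the Rogan-Gladen correction, which adjusts an apparent prevalence for known test sensitivity and specificity (a generic sketch, not the protocol's own worked example):

```python
def rogan_gladen(apparent, se, sp):
    """True prevalence from apparent prevalence and test Se/Sp.

    Standard Rogan-Gladen estimator: (AP + Sp - 1) / (Se + Sp - 1).
    """
    est = (apparent + sp - 1.0) / (se + sp - 1.0)
    return min(max(est, 0.0), 1.0)    # clamp to the unit interval

# Apparent prevalence 20% with Se = 0.90, Sp = 0.95
print(round(rogan_gladen(0.20, 0.90, 0.95), 4))  # 0.1765
```

Here outcome misclassification alone shifts the estimate from 20% to about 17.6%, illustrating the kind of systematic error the flowcharts are designed to surface.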
Metal-ceramic bond strength of Co-Cr alloy processed by selective laser melting
刘洁; 刘洋; 孙荣; 战德松; 王彦岩
2013-01-01
Objective: To evaluate the metal-ceramic bond strength of a selective laser melting Co-Cr alloy. Methods: Twelve Co-Cr metal bars were prepared according to the ISO 9693 standard, with Vita porcelain fused onto the centre of each bar. The sample bars were divided into two groups of six each. The control group was made by a traditional casting process (cast group), and the experimental group was processed by selective laser melting (SLM) technology (SLM group). Metal-ceramic bond strength and fracture mode were assessed using the three-point bending test. Fracture mode analysis was performed by scanning electron microscopy/energy dispersive spectroscopy. Student's t-test was used to analyze the data in SPSS 13.0. Results: The metal-ceramic bond strength of the cast group was (33.45 ± 2.34) MPa, and that of the SLM group was (31.62 ± 2.34) MPa (t = 0.79, P > 0.05). A mixed fracture mode was observed on the debonding interface of all specimens, with little porcelain retained. Conclusions: The metal-ceramic system processed by SLM exhibited a bond strength that satisfies the requirement of clinical application.
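The group comparison above is a standard two-sample t-test. The sketch below computes a Welch t statistic on invented bond-strength values (not the study's raw measurements), purely to show the mechanics:

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return float(t), float(df)

# Invented bond-strength values (MPa), six per group -- not the study's raw data
cast = [31.2, 36.8, 30.5, 35.9, 33.0, 33.3]
slm = [29.0, 34.5, 28.9, 33.8, 31.5, 32.0]
t_stat, dof = welch_t(cast, slm)
print(round(t_stat, 2))  # well below the ~2.2 critical value at alpha = 0.05
```

With this made-up spread the statistic stays below the two-tailed critical value, the same qualitative outcome (no significant difference) as the study reports.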
Variation in acoustic signalling traits exhibits footprints of sexual selection.
Reinhold, Klaus
2011-03-01
Phenotypic variation is ubiquitous in nature and a precondition for adaptive evolution. However, theory predicts that the extent of phenotypic variation should decrease with increasing strength of selection on a trait. Comparative analyses of trait variability have repeatedly used this expectation to infer the type or strength of selection. Yet, the suggested influence of selection on trait variability has rarely been tested empirically. In the present study, I compare estimates of sexual selection strength and trait variability from published data. I restricted the analysis to acoustic courtship traits in amphibians and insects with known variability and corresponding results of female binary choice experiments on these traits. Trait variability and strength of sexual selection were significantly correlated, and both were correlated with signal duration. Because traits under stronger selection had lower variation even after the effect of signal duration was eliminated, I conclude that traces of the strength of selection can be observed with respect to variation of acoustic signalling traits in insects and amphibians. The analysis also shows that traits under stabilizing selection have significantly lower phenotypic variability than traits under directional selection.
Mancini Cecilia
2011-10-01
In the preparation of transgenic murine ES cells it is important to verify that the construct has a single insertion, because an ectopic neomycin phosphotransferase positive-selection cassette (NEO) may cause a position effect. During recent work, in which a knockin SCA28 mouse was prepared, we developed two assays based on real-time PCR, using both SYBR Green and specific minor groove binder (MGB) probes, to evaluate the copy number of NEO by the comparative delta-delta Ct method versus the Rpp30 reference gene. We compared the results from Southern blot, routinely used to quantify NEO copies, with the two real-time PCR assays. Twenty-two clones containing a single NEO copy showed values of 0.98 ± 0.24 (mean ± 2 S.D.) and were clearly distinguishable from clones with two or more NEO copies. This method was found to be useful, easy, sensitive and fast, and could substitute for the widely used but laborious Southern blot method.
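The comparative delta-delta Ct method itself is a one-line calculation: copy number relative to a single-copy calibrator is 2^(-ΔΔCt). A minimal sketch (gene names as in the abstract; the Ct values are invented):

```python
def copy_number(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative copy number by the comparative delta-delta Ct method.

    ct_target / ct_ref:  Ct of NEO and of the Rpp30 reference in the clone
    ct_*_cal:            the same two Ct values in a known single-copy calibrator
    """
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** -ddct

# A clone whose NEO amplifies one cycle earlier (after Rpp30 normalisation)
# than the single-copy calibrator carries about two copies:
print(copy_number(24.0, 25.0, 25.0, 25.0))  # 2.0
```

A single-copy clone should land near 1.0, which matches the 0.98 ± 0.24 window the authors report for their 22 single-copy clones.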
V. Bhujanga Rao
1995-01-01
Flow-induced structural noise of a sonar dome, in which the sonar transducer is housed, constitutes a major source of self-noise above a certain speed of the vessel. Excitation of the sonar dome structure by random pressure fluctuations in the turbulent boundary layer flow leads to acoustic radiation into the interior of the dome. This acoustic radiation is termed flow-induced structural noise. Such noise contributes significantly to the sonar self-noise of submerged vessels cruising at high speed and plays an important role in surface ships, torpedoes, and towed sonars as well. Various published turbulent boundary layer wall pressure models were analyzed, and the most suitable analytical model for the sonar dome application was selected, taking into account high frequency, fluid loading, low wave number contribution, and pressure gradient effects. These investigations included the type of coupling that exists between turbulent boundary layer pressure fluctuations and the dome wall structure of a typical sonar dome. A comparison of theoretical data with data measured onboard a ship is also reported.
Mahmoud, E.; Takey, A.; Shoukry, A.
2016-07-01
We develop a galaxy cluster finding algorithm based on a spectral clustering technique to identify optical counterparts and estimate optical redshifts for X-ray selected cluster candidates. As an application, we run our algorithm on a sample of X-ray cluster candidates selected from the third XMM-Newton serendipitous source catalog (3XMM-DR5) that are located in the Stripe 82 of the Sloan Digital Sky Survey (SDSS). Our method works on galaxies described in the color-magnitude feature space. We begin by examining 45 galaxy clusters with published spectroscopic redshifts in the range of 0.1-0.8 with a median of 0.36. As a result, we are able to identify their optical counterparts and estimate their photometric redshifts, which have a typical accuracy of 0.025 and agree with the published ones. Then, we investigate another 40 X-ray cluster candidates (from the same cluster survey) with no redshift information in the literature and find that 12 candidates are considered as galaxy clusters in the redshift range from 0.29 to 0.76 with a median of 0.57. These systems are newly discovered clusters in X-ray and optical data. Among them, 7 clusters have spectroscopic redshifts for at least one member galaxy.
Mahmoud, Eman; Shoukry, Amin
2016-01-01
We develop a galaxy cluster finding algorithm based on a spectral clustering technique to identify optical counterparts and estimate optical redshifts for X-ray selected cluster candidates. As an application, we run our algorithm on a sample of X-ray cluster candidates selected from the third XMM-Newton serendipitous source catalog (3XMM-DR5) that are located in the Stripe 82 of the Sloan Digital Sky Survey (SDSS). Our method works on galaxies described in the color-magnitude feature space. We begin by examining 45 galaxy clusters with published spectroscopic redshifts in the range of 0.1 to 0.8 with a median of 0.36. As a result, we are able to identify their optical counterparts and estimate their photometric redshifts, which have a typical accuracy of 0.025 and agree with the published ones. Then, we investigate another 40 X-ray cluster candidates (from the same cluster survey) with no redshift information in the literature and find that 12 candidates are considered as galaxy clusters in the redshift range ...
Humphreys, Keith; Blodgett, Janet C; Wagner, Todd H
2014-11-01
Observational studies of Alcoholics Anonymous' (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study, therefore, employed an innovative statistical technique to derive a selection bias-free estimate of AA's impact. Six data sets from 5 National Institutes of Health-funded randomized trials (1 with 2 independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol-dependent individuals in one of the data sets (n = 774) were analyzed separately from the rest of the sample (n = 1,582 individuals pooled from 5 data sets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In 5 of the 6 data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = 0.38, p = 0.001) and 15-month (B = 0.42, p = 0.04) follow-up. However, in the remaining data set, in which preexisting AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. For most individuals seeking help for alcohol problems, increasing AA attendance leads to short- and long-term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high preexisting AA involvement, further increases in AA attendance may have little impact. Copyright © 2014 by the Research Society on Alcoholism.
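Using randomization as an instrument, as described above, can be illustrated with the simple Wald instrumental-variable estimator. This is a minimal sketch with made-up numbers (the study itself used full instrumental variables models, not this toy calculation): the effect of attendance on drinking is the outcome difference between arms divided by the attendance difference between arms, which isolates the attendance variation induced by randomization and is therefore free of self-selection.

```python
def wald_iv_estimate(z, x, y):
    """Wald IV estimate of the effect of x on y using a binary instrument z.

    Here z = randomized assignment to an AA facilitation arm,
    x = AA attendance, y = drinking outcome (e.g., days abstinent).
    Effect = (E[y|z=1] - E[y|z=0]) / (E[x|z=1] - E[x|z=0]).
    """
    def mean(vals):
        return sum(vals) / len(vals)

    y1 = mean([yi for zi, yi in zip(z, y) if zi == 1])
    y0 = mean([yi for zi, yi in zip(z, y) if zi == 0])
    x1 = mean([xi for zi, xi in zip(z, x) if zi == 1])
    x0 = mean([xi for zi, xi in zip(z, x) if zi == 0])
    return (y1 - y0) / (x1 - x0)


# Hypothetical data: the facilitation arm attends more meetings and
# reports more abstinent days; the ratio of the two arm differences
# is the selection-free per-meeting effect.
effect = wald_iv_estimate(z=[1, 1, 0, 0],
                          x=[10, 8, 4, 2],
                          y=[20, 16, 8, 4])
```

The estimator is only informative when randomization actually shifts attendance (a "good instrument", as reported above); if the denominator is near zero, as in the high-preexisting-attendance data set, the approach breaks down.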
Ferraz, J B; Johnson, R K
1993-04-01
Records from 2,495 litters and 14,605 Landrace and Large White pigs from two farms, but established from the same base population and run as replicated selection lines, were analyzed. Selection within herd was on estimated breeding values weighted by economic values. Animal models and REML procedures were used to estimate genetic, phenotypic, and environmental parameters for the number of pigs born alive (NBA), litter weight at 21 d (LW), average daily gain from approximately 30 to 104 kg (ADG), and backfat thickness adjusted to 104 kg (BF). Random animal genetic effects (o), permanent (NBA and LW) or litter (ADG and BF) environmental effects, maternal genetic effects (m), and the covariance between o and m were sequentially added to the model. Estimates of total heritability calculated from all data (h_t^2 = σ_o^2 + 1/2 σ_m^2 + 3/2 σ_om) ranged from .01 to .14 for NBA, from .18 to .22 for LW, from .23 to .34 for ADG, and from .40 to .50 for BF. Maternal genetic variance was from 2.4 to 3.8% of phenotypic variance in NBA, from 1.2 to 3.6% in LW, from .5 to 1.5% in ADG, and from 1.9 to 3.4% in BF. The correlation between o and m was -.07 for NBA, -.25 for LW, -.34 for ADG, and -.26 for BF. Permanent environmental effects explained from 16 to 17% of total phenotypic variation for NBA and from 1.6 to 5.3% for LW. Approximately 7% of the variation in ADG and 5% in BF was due to litter environmental effects. (ABSTRACT TRUNCATED AT 250 WORDS)
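The total-heritability expression above combines direct and maternal genetic variances with their covariance. A minimal sketch of the calculation, assuming (as heritability requires) that the variance components are expressed relative to the phenotypic variance; the input values below are hypothetical, not estimates from the study:

```python
def total_heritability(var_o, var_m, cov_om, var_p):
    """Total heritability as defined in the abstract,
    h_t^2 = (sigma_o^2 + 1/2 sigma_m^2 + 3/2 sigma_om) / sigma_p^2,
    where var_o is the direct (animal) genetic variance, var_m the
    maternal genetic variance, cov_om their covariance, and var_p the
    phenotypic variance."""
    return (var_o + 0.5 * var_m + 1.5 * cov_om) / var_p


# Hypothetical components: a negative direct-maternal covariance
# (as estimated above for all four traits) pulls h_t^2 down.
h2 = total_heritability(var_o=0.10, var_m=0.02, cov_om=-0.01, var_p=1.0)
```

Note how the 3/2 weight on a negative covariance, like the -.07 to -.34 correlations reported above, can noticeably reduce total heritability relative to the direct component alone.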
Prunier, Julien; Laroche, Jérôme; Beaulieu, Jean; Bousquet, Jean
2011-04-01
Outlier detection methods were used to scan the genome of the boreal conifer black spruce (Picea mariana [Mill.] B.S.P.) for gene single-nucleotide polymorphisms (SNPs) potentially involved in adaptations to temperature and precipitation variations. The scan involved 583 SNPs from 313 genes potentially playing adaptive roles. Differentiation estimates among population groups defined following variation in temperature and precipitation were moderately high for adaptive quantitative characters such as the timing of budset or tree height (Q(ST) = 0.189-0.314). Average differentiation estimates for gene SNPs were null, with F(ST) values of 0.005 and 0.006, respectively, among temperature and precipitation population groups. Using two detection approaches, a total of 26 SNPs from 25 genes distributed among 11 of the 12 linkage groups of black spruce were detected as outliers with F(ST) as high as 0.078. Nearly half of the outlier SNPs were located in exons and half of those were nonsynonymous. The functional annotations of genes carrying outlier SNPs and regression analyses between the frequencies of these SNPs and climatic variables supported their involvement in adaptive processes. Several genes carrying outlier SNPs belonged to gene families previously found to harbour outlier SNPs in a reproductively isolated but largely sympatric congeneric species, suggesting differential subfunctionalization of gene duplicates. Selection coefficient estimates (S) were moderate but well above the magnitude of drift (>1/N(e)), indicating that the signature of natural selection could be detected at the nucleotide level despite the recent establishment of these populations during the Holocene. © 2011 Blackwell Publishing Ltd.
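The F(ST) values underlying such outlier scans quantify how allele frequencies diverge between population groups. As a minimal, hypothetical illustration (not the study's estimator, which accounts for sample sizes and multiple populations), Wright's F_ST for a single biallelic SNP between two equally sized groups can be sketched as:

```python
def fst_two_groups(p1, p2):
    """Wright's F_ST for a biallelic SNP between two population groups
    with allele frequencies p1 and p2, assuming equal group sizes:
    F_ST = (H_T - H_S) / H_T, where H_T is the expected heterozygosity
    of the pooled groups and H_S the mean within-group heterozygosity."""
    p_bar = (p1 + p2) / 2.0
    h_t = 2.0 * p_bar * (1.0 - p_bar)                # total heterozygosity
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2.0  # mean within-group
    return (h_t - h_s) / h_t


# Hypothetical frequencies: a modest 0.6 vs 0.4 split already gives
# F_ST well above the ~0.005 genome-wide average reported above.
fst = fst_two_groups(0.6, 0.4)
```

Outlier approaches flag SNPs, like the 26 reported above, whose F_ST sits far in the upper tail of the genome-wide distribution.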
Watson, Kara M.; McHugh, Amy R.
2014-01-01
Regional regression equations were developed for estimating monthly flow-duration and monthly low-flow frequency statistics for ungaged streams in Coastal Plain and non-coastal regions of New Jersey for baseline and current land- and water-use conditions. The equations were developed to estimate 87 different streamflow statistics, which include the monthly 99-, 90-, 85-, 75-, 50-, and 25-percentile flow-durations of the minimum 1-day daily flow; the August–September 99-, 90-, and 75-percentile minimum 1-day daily flow; and the monthly 7-day, 10-year (M7D10Y) low-flow frequency. These 87 streamflow statistics were computed for 41 continuous-record streamflow-gaging stations (streamgages) with 20 or more years of record and 167 low-flow partial-record stations in New Jersey with 10 or more streamflow measurements. The regression analyses used to develop equations to estimate selected streamflow statistics were performed by testing the relation between flow-duration statistics and low-flow frequency statistics for 32 basin characteristics (physical characteristics, land use, surficial geology, and climate) at the 41 streamgages and 167 low-flow partial-record stations. The regression analyses determined that drainage area, soil permeability, average April precipitation, average June precipitation, and percent storage (water bodies and wetlands) were the significant explanatory variables for estimating the selected flow-duration and low-flow frequency statistics. Streamflow estimates were computed for two land- and water-use conditions in New Jersey—land- and water-use during the baseline period of record (defined as the years a streamgage had little to no change in development and water use) and current land- and water-use conditions (1989–2008)—for each selected station using data collected through water year 2008. The baseline period of record is representative of a period when the basin was unaffected by change in development. The current period is
Flexural strength and the probability of failure of cold isostatic pressed zirconia core ceramics.
Siarampi, Eleni; Kontonasaki, Eleana; Papadopoulou, Lambrini; Kantiranis, Nikolaos; Zorba, Triantafillia; Paraskevopoulos, Konstantinos M; Koidis, Petros
2012-08-01
The flexural strength of zirconia core ceramics must predictably withstand the high stresses developed during oral function. The in-depth interpretation of strength parameters and the probability of failure during clinical performance could assist the clinician in selecting the optimum materials while planning treatment. The purpose of this study was to evaluate the flexural strength, based on survival probability and Weibull statistical analysis, of 2 zirconia cores for ceramic restorations. Twenty bar-shaped specimens were milled from 2 core ceramics, IPS e.max ZirCAD and Wieland ZENO Zr, and were loaded until fracture according to ISO 6872 (3-point bending test). An independent samples t test was used to assess significant differences in fracture strength (α=.05). Weibull statistical analysis of the flexural strength data provided 2 parameter estimates: the Weibull modulus (m) and the characteristic strength (σ(0)). The fractured surfaces of the specimens were evaluated by scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS). The crystallographic state of the materials was investigated with x-ray diffraction analysis (XRD) and Fourier transform infrared (FTIR) spectroscopy. The higher mean flexural strength of the WZ ceramics was associated with a lower m and more voids in their microstructure. These findings suggest a greater scattering of strength values and a flaw distribution that are expected to increase failure probability. Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
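The two Weibull parameters named above can be estimated by linearizing the Weibull cumulative distribution and fitting a straight line. A minimal sketch with median-rank plotting positions (one common convention; the study's exact fitting procedure may differ, and the strengths below are synthetic, not the measured data):

```python
import math


def weibull_fit(strengths):
    """Least-squares fit of the two-parameter Weibull model to strength data.

    Linearizes F(s) = 1 - exp(-(s/s0)^m) as
        ln(-ln(1 - F)) = m*ln(s) - m*ln(s0),
    using median-rank plotting positions F_i = (i - 0.5)/n on the sorted
    strengths. Returns (m, s0): Weibull modulus and characteristic strength.
    """
    s = sorted(strengths)
    n = len(s)
    xs = [math.log(v) for v in s]
    ys = [math.log(-math.log(1.0 - (i + 0.5) / n)) for i in range(n)]
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    m = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    s0 = math.exp(xbar - ybar / m)   # from the intercept -m*ln(s0)
    return m, s0


# Synthetic data drawn exactly from a Weibull law (m = 10, s0 = 900 MPa),
# so the fit recovers the parameters.
n = 20
data = [900.0 * (-math.log(1.0 - (i + 0.5) / n)) ** (1.0 / 10.0)
        for i in range(n)]
m_hat, s0_hat = weibull_fit(data)
```

A lower fitted m, as reported above for the WZ ceramics, means a wider scatter of strengths and hence a broader flaw distribution and higher failure probability at a given stress.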
FATIGUE STRENGTH OF HIGH-STRENGTH STEEL,
Cold-hardened by deforming to 83%, the steel was found to have low static notch sensitivity (lower than that of heat-treated steels); its static strength is raised appreciably by increased cold plastic deformation, and its fatigue strength is raised substantially by mechanical polishing. (Author)
The spatial patterns of directional phenotypic selection
Siepielski, Adam M.
2013-09-12
Local adaptation, adaptive population divergence and speciation are often expected to result from populations evolving in response to spatial variation in selection. Yet, we lack a comprehensive understanding of the major features that characterise the spatial patterns of selection, namely the extent of variation among populations in the strength and direction of selection. Here, we analyse a data set of spatially replicated studies of directional phenotypic selection from natural populations. The data set includes 60 studies, consisting of 3937 estimates of selection across an average of five populations. We performed meta-analyses to explore features characterising spatial variation in directional selection. We found that selection tends to vary mainly in strength and less in direction among populations. Although differences in the direction of selection occur among populations they do so where selection is often weakest, which may limit the potential for ongoing adaptive population divergence. Overall, we also found that spatial variation in selection appears comparable to temporal (annual) variation in selection within populations; however, several deficiencies in available data currently complicate this comparison. We discuss future research needs to further advance our understanding of spatial variation in selection. © 2013 John Wiley & Sons Ltd/CNRS.
Assessment of Shear Strength in Silty Soils
Stefaniak Katarzyna
2015-06-01
The article presents a comparison of shear strength values in silty soils from the area of Poznań, determined based on selected Nkt values recommended in the literature, with values of shear strength established on the basis of Nkt values recommended by the author. The analysed silty soils are characterized by a carbonate cementation zone, which made it possible to compare selected empirical coefficients in both normally consolidated and overconsolidated soils.
Design methodology of the strength properties of medical knitted meshes
Mikołajczyk, Z.; Walkowska, A.
2016-07-01
One of the most important utility properties of medical knitted meshes intended for hernia and urological treatment is their bidirectional strength along the courses and wales. The value of this parameter, expected by manufacturers and surgeons, is estimated at 100 N per 5 cm of sample width. Most frequently, these meshes are produced on the basis of single- or double-guide stitches. They are made of polypropylene and polyester monofilament yarns with diameters in the range from 0.6 to 1.2 mm, characterized by high medical purity. The aim of the study was to develop a design methodology for mesh strength based on the geometrical construction of the stitch and the strength of the yarn. The stretching process of the meshes, together with an analysis of their geometry changes, was simulated in the environment of the ProCAD warpknit 5 software. Simulations were made for four selected representative stitches. The real parameters of the loop geometry of the meshes were measured both on a purpose-built, unique measuring stand and on a tensile testing machine. A model of the mechanical stretching of warp-knitted meshes along the courses and wales was developed. The thesis was advanced that the force that breaks a loop of warp-knitted fabric is the lowest of the breaking forces of the yarns linking the loop or the yarns that create its straight sections. This thesis was associated with the theory of strength based on the "weakest link" concept. Experimental verification of the model was carried out for the basic structure of the single-guide mesh. It was shown that the real, relative strength of the mesh related to one course is equal to the breaking strength of the yarn in a loop, while the strength along the wales is close to the breaking strength of a single yarn. In relation to the specific construction of the medical mesh, based on knowledge of the density of the loop structure, the a-jour mesh geometry and the yarn strength, it is possible, with high
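The weakest-link thesis above reduces to two simple steps: the force breaking a loop is the minimum breaking force among its constituent yarns, and the mesh strength per unit width scales with the loop density. A minimal sketch with hypothetical yarn forces and loop densities (the function names and values are illustrative, not the paper's model):

```python
def loop_breaking_force(link_yarn_forces, straight_yarn_forces):
    """Weakest-link estimate of the force that breaks one loop: the lowest
    breaking force among the yarns linking the loop and the yarns forming
    its straight sections (all forces in newtons)."""
    return min(min(link_yarn_forces), min(straight_yarn_forces))


def mesh_strength_per_width(force_per_loop, loops_per_5cm):
    """Predicted mesh strength per 5 cm of sample width, assuming the load
    is shared by the loops counted across the load direction."""
    return force_per_loop * loops_per_5cm


# Hypothetical monofilament data: the weakest yarn section (3.5 N) governs,
# and 32 loops per 5 cm then predicts the strength of the sample width.
f_loop = loop_breaking_force(link_yarn_forces=[4.1, 3.8],
                             straight_yarn_forces=[3.5, 5.0])
strength = mesh_strength_per_width(f_loop, loops_per_5cm=32)
```

With these made-up numbers the predicted 112 N per 5 cm would meet the 100 N requirement stated above; in practice the loop geometry measured on the real mesh feeds the loop-density term.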
Predicting of the compressive strength of RCA concrete
Jaskulski Roman
2017-01-01
The paper presents the results of predicting the strength of 61 concretes made with the use of recycled concrete aggregate (RCA). Five models in the form of first-order polynomials containing two to six variables characterizing the composition of the mixture were formulated for this purpose. Coefficients for the unknowns were selected using linear regression in two variants: with and without an additional coefficient. For each model, the average absolute error of the concrete strength estimation was determined. Because of the different consequences of underestimation and overestimation of the results, the analysis of model quality was carried out distinguishing the two cases. The results indicate that the key to improving the quality of the models is to take into account the quality of the aggregate expressed by the ACV parameter. Better fits were also obtained for models with more variables and the additional coefficient.
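The error analysis described above, an average absolute error split between overestimation and underestimation, can be sketched directly. This is a generic illustration with hypothetical predicted and measured strengths, not the paper's 61-concrete data set:

```python
def signed_error_summary(predicted, measured):
    """Mean absolute error of strength predictions, split into the
    overestimation case (predicted > measured, unsafe for design) and the
    underestimation case (predicted <= measured, conservative).
    Returns (overall MAE, mean overestimation, mean underestimation)."""
    over = [p - m for p, m in zip(predicted, measured) if p > m]
    under = [m - p for p, m in zip(predicted, measured) if p <= m]
    mae = sum(abs(p - m) for p, m in zip(predicted, measured)) / len(predicted)
    mae_over = sum(over) / len(over) if over else 0.0
    mae_under = sum(under) / len(under) if under else 0.0
    return mae, mae_over, mae_under


# Hypothetical strengths in MPa: one overestimate, one underestimate,
# one exact prediction.
mae, mae_over, mae_under = signed_error_summary(
    predicted=[50.0, 40.0, 30.0],
    measured=[45.0, 42.0, 30.0])
```

Separating the two cases matters because, for a strength property, a model that overestimates is riskier than one with the same overall error that underestimates.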
Curran, Christopher A.; Eng, Ken; Konrad, Christopher P.
2012-01-01
A regional low-flow survey of small, perennial streams in western Washington was initiated by the Northwest Indian Fisheries Commission (NWIFC), NWIFC-member tribes, and Point-No-Point Treaty Council in cooperation with the U.S. Geological Survey in 2007 and repeated by the tribes during the low-flow seasons of 2008–09. Low-flow measurements at 63 partial-record and miscellaneous streamflow-measurement sites during surveys in 2007–09 are used with concurrent flows at continuous streamflow-gaging stations (index sites) within the U.S. Geological Survey network to estimate the low-flow metric Q7,10 at each measurement site (Q7,10 is defined as the lowest average streamflow for a consecutive 7-day period that recurs on average once every 10 years). Index-site correlation methods for estimating low-flow characteristics at partial-record sites are reviewed and an empirical Monte Carlo technique is used with the daily streamflow record at 43 index sites to determine the error and bias associated with estimating the Q7,10 at synthetic partial-record sites using three methods: Q-ratio, MOVE.1, and Base-Flow Correlation. The Q-ratio method generally has the lowest error and least amount of bias for 170 scenarios, with each scenario defined by the number of concurrent flow measurements between the partial-record and index sites (ranging from 4 to 20) and the combination of basin attributes used to select the index site. The root-mean square error for the Q-ratio method ranged from 70 to 118 percent, depending on the scenario. The scenario with the smallest root-mean square error used four concurrent flow measurements and the basin attributes: basin area, mean annual precipitation, and base-flow recession time constant, also referred to as tau (τ).
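Of the three methods compared above, the Q-ratio method is the simplest to sketch: scale the index site's Q7,10 by the ratio of flows measured concurrently at the partial-record and index sites. The exact averaging used in the study may differ; this minimal version, with hypothetical flows in cubic feet per second, averages the per-visit ratios:

```python
def q_ratio_estimate(partial_flows, index_flows, index_q7_10):
    """Q-ratio estimate of Q7,10 at a partial-record site.

    partial_flows / index_flows: concurrent low-flow measurements at the
    partial-record site and the continuous-record index site;
    index_q7_10: the Q7,10 computed from the index site's daily record.
    """
    ratios = [p / i for p, i in zip(partial_flows, index_flows)]
    return index_q7_10 * sum(ratios) / len(ratios)


# Hypothetical survey: four concurrent measurements (the smallest number
# of visits tested in the study) suggest the partial-record site carries
# about 30% of the index site's flow.
q7_10 = q_ratio_estimate(partial_flows=[1.2, 0.9, 0.6, 0.3],
                         index_flows=[4.0, 3.0, 2.0, 1.0],
                         index_q7_10=0.5)
```

The Monte Carlo evaluation described above quantifies how the error of such estimates depends on the number of concurrent measurements and on how well the index site's basin attributes match the partial-record site's.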
Tomková, Jana; Švidrnoch, Martin; Maier, Vítězslav; Ondra, Peter
2017-03-07
A new ultra high performance liquid chromatography with electrospray ionization time-of-flight mass spectrometry method for the selective and sensitive separation, identification and determination of selected designer benzodiazepines (namely, pyrazolam, phenazepam, etizolam, flubromazepam, diclazepam, deschloroetizolam, bentazepam, nimetazepam and flubromazolam) in human serum was developed. The separation of the studied designer benzodiazepines was achieved on a C18 chromatographic column using gradient elution within 6 min without any significant matrix interferences. Liquid-liquid extraction with butyl acetate was applied for serum sample clean-up and preconcentration of the studied designer benzodiazepines. The method was validated in terms of linearity, limit of detection, limit of quantification, matrix effects, specificity, precision, accuracy, recovery and sample stability. The limit of detection values were in the range of 0.10-0.15 ng/mL. The method was applied to a spiked serum sample to demonstrate its applicability for systematic toxicology analysis. Furthermore, micellar electrokinetic chromatography was used for the estimation of partition coefficients of the studied designer benzodiazepines as important parameters to evaluate their pharmacological and toxicological properties. This article is protected by copyright. All rights reserved.
Strength and Balance Exercises
Updated: Sep 8, 2016.