Modeling hypoxia in the Chesapeake Bay: Ensemble estimation using a Bayesian hierarchical model
Stow, Craig A.; Scavia, Donald
2009-02-01
Quantifying parameter and prediction uncertainty in a rigorous framework can be an important component of model skill assessment. Generally, models with lower uncertainty will be more useful for prediction and inference than models with higher uncertainty. Ensemble estimation, an idea with deep roots in the Bayesian literature, can be useful to reduce model uncertainty. It is based on the idea that simultaneously estimating common or similar parameters among models can result in more precise estimates. We demonstrate this approach using the Streeter-Phelps dissolved oxygen sag model fit to 29 years of data from Chesapeake Bay. Chesapeake Bay has a long history of bottom water hypoxia and several models are being used to assist management decision-making in this system. The Bayesian framework is particularly useful in a decision context because it can combine both expert-judgment and rigorous parameter estimation to yield model forecasts and a probabilistic estimate of the forecast uncertainty.
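The Streeter-Phelps sag model named in this abstract has a simple closed form. A minimal, illustrative sketch follows; the parameter values are invented for demonstration, not taken from the Chesapeake Bay fit:

```python
import math

def streeter_phelps_deficit(t, L0, D0, kd, ka):
    """Dissolved-oxygen deficit D(t) (mg/L) at travel time t (days).

    L0: initial BOD (mg/L); D0: initial DO deficit (mg/L);
    kd: deoxygenation rate (1/day); ka: reaeration rate (1/day).
    """
    if math.isclose(kd, ka):
        # degenerate case kd == ka (limit of the general formula)
        return (kd * L0 * t + D0) * math.exp(-kd * t)
    return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
        + D0 * math.exp(-ka * t)

def critical_time(L0, D0, kd, ka):
    """Travel time of the maximum deficit (the bottom of the sag), kd != ka."""
    return (1.0 / (ka - kd)) * math.log(
        (ka / kd) * (1.0 - D0 * (ka - kd) / (kd * L0)))

# hypothetical parameter values, for illustration only
L0, D0, kd, ka = 10.0, 1.0, 0.3, 0.5
tc = critical_time(L0, D0, kd, ka)
Dc = streeter_phelps_deficit(tc, L0, D0, kd, ka)
```

In a Bayesian fit of the kind described above, kd and ka would get prior distributions and be estimated jointly from the oxygen data rather than fixed as here.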
Eadie, Gwendolyn; Harris, William
2016-01-01
We present a hierarchical Bayesian method for estimating the total mass and mass profile of the Milky Way Galaxy. The new hierarchical Bayesian approach further improves the framework presented by Eadie, Harris, & Widrow (2015) and Eadie & Harris (2016) and builds upon the preliminary reports by Eadie et al. (2015a,c). The method uses a distribution function $f(\mathcal{E},L)$ to model the galaxy and kinematic data from satellite objects such as globular clusters to trace the Galaxy's gravitational potential. A major advantage of the method is that it not only includes complete and incomplete data simultaneously in the analysis, but also incorporates measurement uncertainties in a coherent and meaningful way. We first test the hierarchical Bayesian framework, which includes measurement uncertainties, using the same data and power-law model assumed in Eadie & Harris (2016), and find the results are similar but more strongly constrained. Next, we take advantage of the new statistical framework and in…
Eadie, Gwendolyn M.; Springford, Aaron; Harris, William E.
2017-02-01
We present a hierarchical Bayesian method for estimating the total mass and mass profile of the Milky Way Galaxy. The new hierarchical Bayesian approach further improves the framework presented by Eadie et al. and Eadie and Harris and builds upon the preliminary reports by Eadie et al. The method uses a distribution function $f(\mathcal{E},L)$ to model the Galaxy and kinematic data from satellite objects, such as globular clusters (GCs), to trace the Galaxy’s gravitational potential. A major advantage of the method is that it not only includes complete and incomplete data simultaneously in the analysis, but also incorporates measurement uncertainties in a coherent and meaningful way. We first test the hierarchical Bayesian framework, which includes measurement uncertainties, using the same data and power-law model assumed in Eadie and Harris and find the results are similar but more strongly constrained. Next, we take advantage of the new statistical framework and incorporate all possible GC data, finding a cumulative mass profile with Bayesian credible regions. This profile implies a mass within 125 kpc of $4.8\times10^{11}\,M_\odot$, with a 95% Bayesian credible region of $(4.0\text{--}5.8)\times10^{11}\,M_\odot$. Our results also provide estimates of the true specific energies of all the GCs. By comparing these estimated energies to the measured energies of GCs with complete velocity measurements, we observe that (the few) remote tracers with complete measurements may play a large role in determining a total mass estimate of the Galaxy. Thus, our study stresses the need for more remote tracers with complete velocity measurements.
Hierarchical Bayes Ensemble Kalman Filtering
Tsyrulnikov, Michael
2015-01-01
Ensemble Kalman filtering (EnKF), when applied to high-dimensional systems, suffers from an inevitably small affordable ensemble size, which results in poor estimates of the background error covariance matrix ${\bf B}$. The common remedy is a kind of regularization, usually an ad-hoc spatial covariance localization (tapering) combined with artificial covariance inflation. Instead of using an ad-hoc regularization, we adopt the idea by Myrseth and Omre (2010) and explicitly admit that the ${\bf B}$ matrix is unknown and random and estimate it along with the state (${\bf x}$) in an optimal hierarchical Bayes analysis scheme. We separate forecast errors into predictability errors (i.e. forecast errors due to uncertainties in the initial data) and model errors (forecast errors due to imperfections in the forecast model) and include the two respective components ${\bf P}$ and ${\bf Q}$ of the ${\bf B}$ matrix into the extended control vector $({\bf x},{\bf P},{\bf Q})$. Similarly, we break the traditional backgrou…
Classification using Hierarchical Naive Bayes models
DEFF Research Database (Denmark)
Langseth, Helge; Dyhre Nielsen, Thomas
2006-01-01
Classification problems have a long history in the machine learning literature. One of the simplest, and yet most consistently well-performing, sets of classifiers is the Naïve Bayes models. However, an inherent problem with these classifiers is the assumption that all attributes used to describe an instance are conditionally independent given the class of that instance. When this assumption is violated (which is often the case in practice) it can reduce classification accuracy due to “information double-counting” and interaction omission. In this paper we focus on a relatively new set of models, termed Hierarchical Naïve Bayes models. Hierarchical Naïve Bayes models extend the modeling flexibility of Naïve Bayes models by introducing latent variables to relax some of the independence statements in these models. We propose a simple algorithm for learning Hierarchical Naïve Bayes models…
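The conditional-independence assumption criticized in this abstract is exactly what makes a plain Naïve Bayes classifier factorize over attributes. A minimal categorical sketch (the toy weather data are invented):

```python
import math
from collections import Counter, defaultdict

def train_nb(rows, labels, alpha=1.0):
    """Fit a categorical Naive Bayes classifier with Laplace smoothing."""
    classes = Counter(labels)
    n_attrs = len(rows[0])
    # counts[c][j][v] = number of class-c instances with attribute j == v
    counts = {c: [defaultdict(int) for _ in range(n_attrs)] for c in classes}
    values = [set() for _ in range(n_attrs)]
    for x, y in zip(rows, labels):
        for j, v in enumerate(x):
            counts[y][j][v] += 1
            values[j].add(v)

    def predict(x):
        best, best_score = None, float("-inf")
        for c, nc in classes.items():
            # log P(c) + sum_j log P(x_j | c): attributes are assumed
            # conditionally independent given the class
            score = math.log(nc / len(labels))
            for j, v in enumerate(x):
                score += math.log((counts[c][j][v] + alpha)
                                  / (nc + alpha * len(values[j])))
            if score > best_score:
                best, best_score = c, score
        return best

    return predict

# toy data (invented for illustration)
rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
labels = ["no", "no", "yes", "yes"]
predict = train_nb(rows, labels)
```

The hierarchical variant described in the paper would insert latent variables between the class and correlated attributes; the sketch above only shows the baseline model whose independence assumption those latent variables relax.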
A Hierarchical Bayes Ensemble Kalman Filter
Tsyrulnikov, Michael; Rakitko, Alexander
2017-01-01
A new ensemble filter that allows for the uncertainty in the prior distribution is proposed and tested. The filter relies on the conditional Gaussian distribution of the state given the model-error and predictability-error covariance matrices. The latter are treated as random matrices and updated in a hierarchical Bayes scheme along with the state. The (hyper)prior distribution of the covariance matrices is assumed to be inverse Wishart. The new Hierarchical Bayes Ensemble Filter (HBEF) assimilates ensemble members as generalized observations and allows ordinary observations to influence the covariances. The actual probability distribution of the ensemble members is allowed to be different from the true one. An approximation that leads to a practicable analysis algorithm is proposed. The new filter is studied in numerical experiments with a doubly stochastic one-variable model of "truth". The model permits the assessment of the variance of the truth and the true filtering error variance at each time instance. The HBEF is shown to outperform the EnKF and the HEnKF by Myrseth and Omre (2010) in a wide range of filtering regimes in terms of performance of its primary and secondary filters.
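In one dimension the inverse-Wishart prior on a covariance matrix reduces to an inverse-gamma prior on a variance, and the conjugate update such a hierarchical scheme performs is easy to sketch. This is only a scalar illustration of the prior-updating idea, not the HBEF algorithm itself, and all numbers are invented:

```python
def update_variance(a, b, residuals):
    """Conjugate update of an inverse-gamma IG(a, b) prior on a variance,
    given zero-mean Gaussian residuals (e.g. ensemble perturbations).

    Posterior is IG(a + n/2, b + sum(r^2)/2); its mean blends the prior
    guess b/(a-1) with the sample second moment.
    """
    n = len(residuals)
    a_post = a + n / 2.0
    b_post = b + sum(r * r for r in residuals) / 2.0
    post_mean = b_post / (a_post - 1.0)  # defined for a_post > 1
    return a_post, b_post, post_mean

# prior guess of the variance is b/(a-1) = 1.0; four invented perturbations
a_post, b_post, v_hat = update_variance(3.0, 2.0, [0.5, -1.0, 2.0, -0.5])
```

The point of treating the covariance as random, as in the abstract above, is visible even here: with few ensemble members the posterior mean stays anchored near the prior instead of collapsing onto a noisy sample variance.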
Hierarchical Bayes Ensemble Kalman Filter for geophysical data assimilation
Tsyrulnikov, Michael; Rakitko, Alexander
2016-04-01
In the Ensemble Kalman Filter (EnKF), the forecast error covariance matrix B is estimated from a sample (ensemble), which inevitably implies a degree of uncertainty. This uncertainty is especially large in high dimensions, where the affordable ensemble size is orders of magnitude less than the dimensionality of the system. Common remedies include ad-hoc devices like variance inflation and covariance localization. The goal of this study is to optimize the account for the inherent uncertainty of the B matrix in EnKF. Following the idea by Myrseth and Omre (2010), we explicitly admit that the B matrix is unknown and random and estimate it along with the state (x) in an optimal hierarchical Bayes analysis scheme. We separate forecast errors into predictability errors (i.e. forecast errors due to uncertainties in the initial data) and model errors (forecast errors due to imperfections in the forecast model) and include the two respective components P and Q of the B matrix into the extended control vector (x,P,Q). Similarly, we break the traditional forecast ensemble into the predictability-error related ensemble and the model-error related ensemble. The reason for separating model errors from predictability errors is the fundamental difference between the two sources of error. Model errors are external (i.e. they do not depend on the filter's performance) whereas predictability errors are internal to the filter (i.e. they are determined by the filter's behavior). At the analysis step, we specify inverse-Wishart-based priors for the random matrices P and Q and a conditionally Gaussian prior for the state x. Then, we update the prior distribution of (x,P,Q) using both observation and ensemble data, so that ensemble members are used as generalized observations and ordinary observations are allowed to influence the covariances. We show that for linear dynamics and linear observation operators, conditional Gaussianity of the state is preserved in the course of filtering. At the forecast…
Noma, Hisashi; Matsui, Shigeyuki
2013-05-20
The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources in this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates the differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression.
A Hierarchical Framework for Facial Age Estimation
Directory of Open Access Journals (Sweden)
Yuyu Liang
2014-01-01
Age estimation is a complex multiclass classification or regression problem. To address the uneven distribution of age databases and the common neglect of ordinal information, this paper presents a hierarchical age estimation system comprising age-group and specific-age estimation. In our system, two novel classifiers, sequence k-nearest neighbor (SKNN) and ranking-KNN, are introduced to predict age group and age value, respectively. Notably, ranking-KNN utilizes the ordinal information between samples in the estimation process rather than regarding samples as separate individuals. Tested on the FG-NET database, our system achieves a mean absolute error (MAE) of 4.97 for age estimation.
Hierarchical Boltzmann simulations and model error estimation
Torrilhon, Manuel; Sarna, Neeraj
2017-08-01
A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while subsequent refinement successively improves the result toward the complete Boltzmann solution. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof-of-concept of such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows us to provide model error estimates by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.
Directory of Open Access Journals (Sweden)
Takebayashi Naoki
2007-07-01
Background: Although testing for simultaneous divergence (vicariance) across different population pairs that span the same barrier to gene flow is of central importance to evolutionary biology, researchers often equate the gene tree and population/species tree, thereby ignoring stochastic coalescent variance in their conclusions of temporal incongruence. In contrast to other available phylogeographic software packages, msBayes is the only one that analyses data from multiple species/population pairs under a hierarchical model. Results: msBayes employs approximate Bayesian computation (ABC) under a hierarchical coalescent model to test for simultaneous divergence (TSD) in multiple co-distributed population pairs. Simultaneous isolation is tested by estimating three hyper-parameters that characterize the degree of variability in divergence times across co-distributed population pairs, while allowing for variation in various within-population-pair demographic parameters (sub-parameters) that can affect the coalescent. msBayes is a software package consisting of several C and R programs that are run with a Perl "front-end". Conclusion: The method reasonably distinguishes simultaneous isolation from temporal incongruence in the divergence of co-distributed population pairs, even with sparse sampling of individuals. Because the estimation step is decoupled from the simulation step, one can rapidly evaluate different ABC acceptance/rejection conditions and the choice of summary statistics. Given the complex and idiosyncratic nature of testing multi-species biogeographic hypotheses, we envision msBayes as a powerful and flexible tool for tackling a wide array of difficult research questions that use population genetic data from multiple co-distributed species. The msBayes pipeline is available for download at http://msbayes.sourceforge.net/ under an open source license (GNU General Public License).
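The ABC acceptance/rejection step described in this abstract can be sketched generically. The toy below infers a Gaussian mean rather than divergence times, and every number in it is invented; it only illustrates the simulate-then-filter loop that msBayes runs under its hierarchical coalescent model:

```python
import random
import statistics

def abc_rejection(s_obs, prior_draw, simulate, summary, eps, n_sims, rng):
    """Keep parameter draws whose simulated summary lies within eps of s_obs."""
    accepted = []
    for _ in range(n_sims):
        theta = prior_draw(rng)
        s_sim = summary(simulate(theta, rng))
        if abs(s_sim - s_obs) <= eps:
            accepted.append((theta, s_sim))
    return accepted

rng = random.Random(1)
prior_draw = lambda r: r.uniform(0.0, 10.0)            # flat prior on the mean
simulate = lambda mu, r: [r.gauss(mu, 1.0) for _ in range(20)]
summary = statistics.mean                              # one summary statistic
s_obs = 4.0                                            # "observed" summary (invented)
accepted = abc_rejection(s_obs, prior_draw, simulate, summary,
                         eps=0.5, n_sims=2000, rng=rng)
```

Because acceptance is decoupled from simulation, as the abstract notes, one can re-filter the same `(theta, s_sim)` pairs under a different `eps` or summary without re-simulating.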
A hierarchical estimator development for estimation of tire-road friction coefficient
Zhang, Xudong; Göhlich, Dietmar
2017-01-01
The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified “magic formula” tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method. PMID:28178332
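A general regression neural network of the kind used in the small-excitation branch above is essentially Nadaraya-Watson kernel regression: a kernel-weighted average of training targets. A minimal sketch; the slip-ratio/friction pairs and bandwidth are invented, not tire data from the paper:

```python
import math

def grnn_predict(x, train_x, train_y, sigma):
    """GRNN prediction: Gaussian-kernel-weighted average of training targets."""
    weights = [math.exp(-((x - xi) ** 2) / (2.0 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# hypothetical slip-ratio -> friction-coefficient pairs (invented)
slip = [0.02, 0.05, 0.10, 0.15]
mu = [0.25, 0.55, 0.80, 0.85]
mu_hat = grnn_predict(0.05, slip, mu, sigma=0.01)
```

This shows why the paper can avoid storing a full tire model for small excitations: the mapping from inputs to friction coefficient is held entirely in the training pairs.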
On Bayes linear unbiased estimation of estimable functions for the singular linear model
Institute of Scientific and Technical Information of China (English)
ZHANG Weiping; WEI Laisheng
2005-01-01
The unique Bayes linear unbiased estimator (Bayes LUE) of estimable functions is derived for the singular linear model. The superiority of Bayes LUE over the ordinary best linear unbiased estimator is investigated under the mean square error matrix (MSEM) criterion.
Hierarchical state-space estimation of leatherback turtle navigation ability.
Mills Flemming, Joanna; Jonsen, Ian D; Myers, Ransom A; Field, Christopher A
2010-12-28
Remotely sensed tracking technology has revealed remarkable migration patterns that were previously unknown; however, models to optimally use such data have developed more slowly. Here, we present a hierarchical Bayes state-space framework that allows us to combine tracking data from a collection of animals and make inferences at both individual and broader levels. We formulate models that allow the navigation ability of animals to be estimated and demonstrate how information can be combined over many animals to allow improved estimation. We also show how formal hypothesis testing regarding navigation ability can easily be accomplished in this framework. Using Argos satellite tracking data from 14 leatherback turtles, 7 males and 7 females, during their southward migration from Nova Scotia, Canada, we find that the circle of confusion (the radius around an animal's location within which it is unable to determine its location precisely) is approximately 96 km. This estimate suggests that the turtles' navigation does not need to be highly accurate, especially if they are able to use more reliable cues as they near their destination. Moreover, for the 14 turtles examined, there is little evidence to suggest that male and female navigation abilities differ. Because of the minimal assumptions made about the movement process, our approach can be used to estimate and compare navigation ability for many migratory species that are able to carry electronic tracking devices.
Estimation of Response Functions Based on Variational Bayes Algorithm in Dynamic Images Sequences
Directory of Open Access Journals (Sweden)
Bowei Shan
2016-01-01
We propose a nonparametric Bayesian model based on a variational Bayes algorithm to estimate the response functions in dynamic medical imaging. In dynamic renal scintigraphy, the impulse response or retention functions are rather complicated and finding a suitable parametric form is problematic. In this paper, we estimate the response functions using nonparametric Bayesian priors. These priors are designed to favor desirable properties of the functions, such as sparsity or smoothness. These assumptions are used within hierarchical priors of the variational Bayes algorithm. We applied our algorithm to a real online dataset of dynamic renal scintigraphy. The results demonstrate that this algorithm improves the estimation of response functions with nonparametric priors.
Bayes and empirical Bayes iteration estimators in two seemingly unrelated regression equations
Institute of Scientific and Technical Information of China (English)
WANG; Lichun
2005-01-01
For a system of two seemingly unrelated regression equations given by $y_1 = X_1\beta + \varepsilon_1$, $y_2 = X_2\gamma + \varepsilon_2$ ($y_1$ is an $m \times 1$ vector and $y_2$ is an $n \times 1$ vector, $m \neq n$), employing the covariance-adjusted technique, we propose parametric Bayes and empirical Bayes iteration estimator sequences for the regression coefficients. We prove that both covariance matrices converge monotonically and that the Bayes iteration estimator sequence is consistent. Based on the mean square error (MSE) criterion, we elaborate the superiority of the empirical Bayes iteration estimator over the Bayes estimator of a single equation when the covariance matrix of errors is unknown. The results obtained in this paper further show the power of the covariance-adjusted approach.
Empirical Bayes Estimation in the Rasch Model: A Simulation.
de Gruijter, Dato N. M.
In a situation where the population distribution of latent trait scores can be estimated, the ordinary maximum likelihood estimator of latent trait scores may be improved upon by taking the estimated population distribution into account. In this paper empirical Bayes estimators are compared with the likelihood estimator for three samples of 300…
New aerial survey and hierarchical model to estimate manatee abundance
Langtimm, Catherine A.; Dorazio, Robert M.; Stith, Bradley M.; Doyle, Terry J.
2011-01-01
Monitoring the response of endangered and protected species to hydrological restoration is a major component of the adaptive management framework of the Comprehensive Everglades Restoration Plan. The endangered Florida manatee (Trichechus manatus latirostris) lives at the marine-freshwater interface in southwest Florida and is likely to be affected by hydrologic restoration. To provide managers with prerestoration information on distribution and abundance for postrestoration comparison, we developed and implemented a new aerial survey design and hierarchical statistical model to estimate and map abundance of manatees as a function of patch-specific habitat characteristics, indicative of manatee requirements for offshore forage (seagrass), inland fresh drinking water, and warm-water winter refuge. We estimated the number of groups of manatees from dual-observer counts and estimated the number of individuals within groups by removal sampling. Our model is unique in that we jointly analyzed group and individual counts using assumptions that allow probabilities of group detection to depend on group size. Ours is the first analysis of manatee aerial surveys to model spatial and temporal abundance of manatees in association with habitat type while accounting for imperfect detection. We conducted the study in the Ten Thousand Islands area of southwestern Florida, USA, which was expected to be affected by the Picayune Strand Restoration Project to restore hydrology altered for a failed real-estate development. We conducted 11 surveys in 2006, spanning the cold, dry season and warm, wet season. To examine short-term and seasonal changes in distribution we flew paired surveys 1–2 days apart within a given month during the year. Manatees were sparsely distributed across the landscape in small groups. Probability of detection of a group increased with group size; the magnitude of the relationship between group size and detection probability varied among surveys. Probability
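The dual-observer counts above rest on a simple identity: if two observers detect a group independently with probabilities p1 and p2, the group is missed only when both miss it. A sketch with invented detection rates (the real model lets these probabilities depend on group size):

```python
def dual_observer_detection(p1, p2):
    """Probability that at least one of two independent observers detects a group:
    1 minus the probability that both miss it."""
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def expected_count(n_groups, p1, p2):
    """Expected number of groups recorded by the observer pair."""
    return n_groups * dual_observer_detection(p1, p2)

p_pair = dual_observer_detection(0.7, 0.6)   # invented single-observer rates
seen = expected_count(50, 0.7, 0.6)
```

Comparing which groups each observer recorded is what lets the survey estimate p1 and p2, and hence correct counts for the groups neither observer saw.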
Bayes Estimation for Inverse Rayleigh Model under Different Loss Functions
Directory of Open Access Journals (Sweden)
Guobing Fan
2015-04-01
The inverse Rayleigh distribution plays an important role in life testing and reliability. The aim of this article is to study the Bayes estimation of the parameter of the inverse Rayleigh distribution. Bayes estimators are obtained under squared error, LINEX and entropy loss functions on the basis of a quasi-prior distribution. Comparisons in terms of risks among the estimators of the parameter under the three loss functions are also studied. Finally, a numerical example is used to illustrate the results.
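The three loss functions compared in this abstract lead to different point estimates from the same posterior: squared error gives the posterior mean, LINEX with parameter a gives -(1/a) log E[exp(-a*theta)], and the usual entropy loss for a scale parameter gives 1/E[1/theta]. A numerical sketch on an invented discretized posterior (not the inverse Rayleigh posterior of the paper):

```python
import math

def bayes_estimators(theta, post, a=1.0):
    """Point estimates from a discretized posterior {theta_i: post_i}.

    squared error -> posterior mean
    LINEX(a)      -> -(1/a) * log E[exp(-a*theta)]
    entropy loss  -> 1 / E[1/theta]  (standard result for scale parameters)
    """
    z = sum(post)
    post = [p / z for p in post]                       # normalize weights
    mean = sum(t * p for t, p in zip(theta, post))
    linex = -math.log(sum(math.exp(-a * t) * p
                          for t, p in zip(theta, post))) / a
    entropy = 1.0 / sum(p / t for t, p in zip(theta, post))
    return mean, linex, entropy

# an invented discretized posterior over a positive parameter
theta = [0.5, 1.0, 1.5, 2.0]
post = [0.1, 0.4, 0.4, 0.1]
mean, linex, entropy = bayes_estimators(theta, post, a=1.0)
```

By Jensen's inequality both the LINEX (a > 0) and entropy estimates sit below the posterior mean here, which is the kind of asymmetry the risk comparisons in the paper quantify.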
Application of Bayesian Hierarchical Prior Modeling to Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Shutin, Dmitriy
2012-01-01
Existing methods for sparse channel estimation typically provide an estimate computed as the solution maximizing an objective function defined as the sum of the log-likelihood function and a penalization term proportional to the l1-norm of the parameter of interest. However, other penalization… The estimators result as an application of the variational message-passing algorithm on the factor graph representing the signal model extended with the hierarchical prior models. Numerical results demonstrate the superior performance of our channel estimators as compared to traditional and state-of-the-art sparse methods.
Bayesian hierarchical grouping: Perceptual grouping as mixture estimation.
Froyen, Vicky; Feldman, Jacob; Singh, Manish
2015-10-01
We propose a novel framework for perceptual grouping based on the idea of mixture models, called Bayesian hierarchical grouping (BHG). In BHG, we assume that the configuration of image elements is generated by a mixture of distinct objects, each of which generates image elements according to some generative assumptions. Grouping, in this framework, means estimating the number and the parameters of the mixture components that generated the image, including estimating which image elements are "owned" by which objects. We present a tractable implementation of the framework, based on the hierarchical clustering approach of Heller and Ghahramani (2005). We illustrate it with examples drawn from a number of classical perceptual grouping problems, including dot clustering, contour integration, and part decomposition. Our approach yields an intuitive hierarchical representation of image elements, giving an explicit decomposition of the image into mixture components, along with estimates of the probability of various candidate decompositions. We show that BHG accounts well for a diverse range of empirical data drawn from the literature. Because BHG provides a principled quantification of the plausibility of grouping interpretations over a wide range of grouping problems, we argue that it provides an appealing unifying account of the elusive Gestalt notion of Prägnanz.
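The "ownership" computation in BHG is the standard mixture-model responsibility: the posterior probability that each component generated a given image element. For two 1-D Gaussian components it can be sketched as follows (weights, means and the probe points are invented):

```python
import math

def responsibilities(x, comps):
    """P(component k | x) for a list of (weight, mean, sd) Gaussian components."""
    dens = [w * math.exp(-((x - m) ** 2) / (2.0 * s * s))
            / (s * math.sqrt(2.0 * math.pi))
            for w, m, s in comps]
    z = sum(dens)                      # total mixture density at x
    return [d / z for d in dens]

comps = [(0.5, 0.0, 1.0), (0.5, 5.0, 1.0)]   # two invented "objects"
r = responsibilities(0.5, comps)             # element close to the first object
```

BHG additionally estimates the number of components and organizes candidate groupings hierarchically; the snippet shows only the ownership step once components are proposed.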
A Numerical Empirical Bayes Procedure for Finding an Interval Estimate.
Lord, Frederic M.
A numerical procedure is outlined for obtaining an interval estimate of a parameter in an empirical Bayes estimation problem. The case where each observed value x has a binomial distribution, conditional on a parameter zeta, is the only case considered. For each x, the parameter estimated is the expected value of zeta given x. The main purpose is…
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Directory of Open Access Journals (Sweden)
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as the solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties superior to those of classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also showed reduced bias in the estimates of the subjective value of time and consumer surplus.
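The correspondence invoked above, with Lagrange multipliers acting as logit parameters, comes from the fact that maximizing entropy subject to linear constraints yields softmax choice shares. A minimal sketch (utilities and the scale value are invented; the paper's hierarchical/nested structure is not shown):

```python
import math

def softmax_shares(utilities, scale=1.0):
    """Choice shares of a logit model; `scale` plays the role of the Lagrange
    multiplier on the linear (e.g. cost) constraint of the entropy problem."""
    m = max(utilities)                       # subtract max for numerical stability
    exps = [math.exp(scale * (u - m)) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

shares = softmax_shares([1.0, 2.0, 0.5], scale=1.0)
```

At scale 0 the constraint is inactive and the maximum-entropy solution is uniform; as the scale grows, shares concentrate on the highest-utility alternative.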
Learning curve estimation in medical devices and procedures: hierarchical modeling.
Govindarajulu, Usha S; Stillo, Marco; Goldfarb, David; Matheny, Michael E; Resnic, Frederic S
2017-07-30
In the use of medical device procedures, learning effects have been shown to be a critical component of medical device safety surveillance. To support their estimation of these effects, we evaluated multiple methods for modeling these rates within a complex simulated dataset representing patients treated by physicians clustered within institutions. We employed unique modeling for the learning curves to incorporate the learning hierarchy between institution and physicians and then modeled them within established methods that work with hierarchical data such as generalized estimating equations (GEE) and generalized linear mixed effect models. We found that both methods performed well, but that the GEE may have some advantages over the generalized linear mixed effect models for ease of modeling and a substantially lower rate of model convergence failures. We then focused more on using GEE and performed a separate simulation to vary the shape of the learning curve as well as employed various smoothing methods to the plots. We concluded that while both hierarchical methods can be used with our mathematical modeling of the learning curve, the GEE tended to perform better across multiple simulated scenarios in order to accurately model the learning effect as a function of physician and hospital hierarchical data in the use of a novel medical device. We found that the choice of shape used to produce the 'learning-free' dataset would be dataset specific, while the choice of smoothing method was negligibly different from one another. This was an important application to understand how best to fit this unique learning curve function for hierarchical physician and hospital data. Copyright © 2017 John Wiley & Sons, Ltd.
Flexible distributions for triple-goal estimates in two-stage hierarchical models
Paddock, Susan M.; Ridgeway, Greg; Lin, Rongheng; Louis, Thomas A.
2009-01-01
Performance evaluations often aim to achieve goals such as obtaining estimates of unit-specific means, ranks, and the distribution of unit-specific parameters. The Bayesian approach provides a powerful way to structure models for achieving these goals. While no single estimate can be optimal for achieving all three inferential goals, the communication and credibility of results will be enhanced by reporting a single estimate that performs well for all three. Triple goal estimates [Shen and Louis, 1998. Triple-goal estimates in two-stage hierarchical models. J. Roy. Statist. Soc. Ser. B 60, 455–471] have this performance and are appealing for performance evaluations. Because triple-goal estimates rely more heavily on the entire distribution than do posterior means, they are more sensitive to misspecification of the population distribution and we present various strategies to robustify triple-goal estimates by using nonparametric distributions. We evaluate performance based on the correctness and efficiency of the robustified estimates under several scenarios and compare empirical Bayes and fully Bayesian approaches to model the population distribution. We find that when data are quite informative, conclusions are robust to model misspecification. However, with less information in the data, conclusions can be quite sensitive to the choice of population distribution. Generally, use of a nonparametric distribution pays very little in efficiency when a parametric population distribution is valid, but successfully protects against model misspecification. PMID:19603088
Estimation of Extreme Marine Hydrodynamic Variables in Western Laizhou Bay
Institute of Scientific and Technical Information of China (English)
DAI Yanchen; QIAO Lulu; XU Jishang; ZHOU Chunyan; DING Dong; BI Wei
2015-01-01
Laizhou Bay and its adjacent waters are of great importance to China's marine oil and gas development. It is therefore crucial to estimate return-period values of marine environmental variables in this region to ensure the safety and success of maritime engineering and exploration. In this study, we used numerical simulations to estimate extreme wave height, current velocity and sea-level height in western Laizhou Bay. The results show that the sea-level rise starts at the mouth of the bay, increases toward the west/southwest, and reaches its maximum in the deepest basin of the bay. The 100-year return-period values of sea-level rise can reach 3.4–4.0 m in the western bay. The elevation of the western part of the Qingdong Oil Field would remain above the sea surface during extreme low sea level, while the rest of the oil field would be 1.6–2.4 m below the sea surface. The return-period value of wave height is strongly affected by water depth; in fact, its spatial distribution is similar to that of the isobaths. The 100-year return-period values of effective wave height can reach 6 m or higher in the central bay and more than 1 m in the shallow water near shore. The 100-year return-period value of current velocity is about 1.2–1.8 m s⁻¹ in the Qingdong Oil Field. These results provide a scientific basis for ensuring construction safety and reducing construction cost.
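Return-period values like the 100-year wave heights quoted above are typically read off a fitted extreme-value distribution; for a Gumbel fit to annual maxima the quantile is closed-form. A sketch with invented location/scale parameters (the paper itself uses numerical simulation, not necessarily a Gumbel fit):

```python
import math

def gumbel_return_level(T, mu, beta):
    """T-year return level of a Gumbel(mu, beta) annual-maximum distribution:
    the value exceeded with probability 1/T in any given year."""
    if T <= 1:
        raise ValueError("return period must exceed 1 year")
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# invented annual-maximum wave-height parameters (metres)
h100 = gumbel_return_level(100, mu=3.0, beta=0.6)
h10 = gumbel_return_level(10, mu=3.0, beta=0.6)
```

The return level grows roughly linearly in log T under this model, which is why extrapolating from short records to 100-year values is sensitive to the fitted scale parameter.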
A Hierarchical NeuroBayes-based Algorithm for Full Reconstruction of B Mesons at B Factories
Feindt, Michael; Kreps, Michal; Kuhr, Thomas; Neubauer, Sebastian; Zander, Daniel; Zupanc, Anze
2011-01-01
We describe a new B-meson full reconstruction algorithm designed for the Belle experiment at the B-factory KEKB, an asymmetric e+e- collider. To maximize the number of reconstructed B decay channels, it utilizes a hierarchical reconstruction procedure and probabilistic calculus instead of classical selection cuts. The multivariate analysis package NeuroBayes was used extensively to hold the balance between highest possible efficiency, robustness and acceptable CPU time consumption. In total, 1042 exclusive decay channels were reconstructed, employing 71 neural networks altogether. Overall, we correctly reconstruct one B+/- or B0 candidate in 0.3% or 0.2% of the BBbar events, respectively. This is an improvement in efficiency by roughly a factor of 2, depending on the analysis considered, compared to the cut-based classical reconstruction algorithm used at Belle. The new framework also features the ability to choose the desired purity or efficiency of the fully reconstructed sample. If the same purity as for t...
Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data.
Dorazio, Robert M
2013-01-01
In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar, and often identical, inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.
Directory of Open Access Journals (Sweden)
Piergiorgi Paolo
2006-11-01
Background: Uncertainty often affects molecular biology experiments and data for different reasons. Heterogeneity of gene or protein expression within the same tumor tissue is an example of biological uncertainty which should be taken into account when molecular markers are used in decision making. Tissue Microarray (TMA) experiments allow for large-scale profiling of tissue biopsies, investigating protein patterns characterizing specific disease states. TMA studies deal with multiple sampling of the same patient, and therefore with multiple measurements of the same protein target, to account for possible biological heterogeneity. The aim of this paper is to provide and validate a classification model taking into consideration the uncertainty associated with measuring replicate samples. Results: We propose an extension of the well-known Naïve Bayes classifier, which accounts for biological heterogeneity in a probabilistic framework, relying on Bayesian hierarchical models. The model, which can be efficiently learned from the training dataset, exploits a closed-form classification equation, thus adding no computational cost with respect to the standard Naïve Bayes classifier. We validated the approach on several simulated datasets, comparing its performance with the Naïve Bayes classifier. Moreover, we demonstrated that explicitly dealing with heterogeneity can improve classification accuracy on a TMA prostate cancer dataset. Conclusion: The proposed Hierarchical Naïve Bayes classifier can be conveniently applied in problems where within-sample heterogeneity must be taken into account, such as TMA experiments and biological contexts where several measurements (replicates) are available for the same biological sample. The performance of the new approach is better than that of the standard Naïve Bayes model, in particular when the within-sample heterogeneity differs between classes.
Estimating methane emissions from biological and fossil-fuel sources in the San Francisco Bay Area
Jeong, Seongeun; Cui, Xinguang; Blake, Donald R.; Miller, Ben; Montzka, Stephen A.; Andrews, Arlyn; Guha, Abhinav; Martien, Philip; Bambha, Ray P.; LaFranchi, Brian; Michelsen, Hope A.; Clements, Craig B.; Glaize, Pierre; Fischer, Marc L.
2017-01-01
We present the first sector-specific analysis of methane (CH4) emissions from the San Francisco Bay Area (SFBA) using CH4 and volatile organic compound (VOC) measurements from six sites during September - December 2015. We apply a hierarchical Bayesian inversion to separate the biological from fossil-fuel (natural gas and petroleum) sources using the measurements of CH4 and selected VOCs, a source-specific 1 km CH4 emission model, and an atmospheric transport model. We estimate that SFBA CH4 emissions are 166-289 Gg CH4/yr (at 95% confidence), 1.3-2.3 times higher than a recent inventory with much of the underestimation from landfill. Including the VOCs, 82 ± 27% of total posterior median CH4 emissions are biological and 17 ± 3% fossil fuel, where landfill and natural gas dominate the biological and fossil-fuel CH4 of prior emissions, respectively.
A Hierarchical Clustering Methodology for the Estimation of Toxicity
A Quantitative Structure Activity Relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural sim...
Hierarchical set of models to estimate soil thermal diffusivity
Arkhangelskaya, Tatiana; Lukyashchenko, Ksenia
2016-04-01
Soil thermal properties significantly affect the land-atmosphere heat exchange rates. Intra-soil heat fluxes depend both on temperature gradients and on soil thermal conductivity. Soil temperature changes due to energy fluxes are determined by soil specific heat. Thermal diffusivity is equal to thermal conductivity divided by volumetric specific heat and reflects both the soil's ability to transfer heat and its ability to change temperature when heat is supplied or withdrawn. The higher the soil thermal diffusivity, the thicker the soil/ground layer in which diurnal and seasonal temperature fluctuations are registered and the smaller the temperature fluctuations at the soil surface. Thermal diffusivity vs. moisture dependencies for loams, sands and clays of the East European Plain were obtained using the unsteady-state method. Thermal diffusivity differed greatly among soils, and for a given soil it could vary by 2, 3 or even 5 times depending on soil moisture. The shapes of the thermal diffusivity vs. moisture dependencies also differed: peak curves were typical for sandy soils, and sigmoid curves were typical for loamy and especially for compacted soils. The lowest thermal diffusivities and the smallest range of their variability with soil moisture were obtained for clays with high humus content. A hierarchical set of models will be presented, allowing an estimate of soil thermal diffusivity from available data on soil texture, moisture, bulk density and organic carbon. When developing these models, the first step was to parameterize the experimental thermal diffusivity vs. moisture dependencies with a 4-parameter function; the next step was to obtain regression formulas to estimate the function parameters from available data on basic soil properties; the last step was to evaluate the accuracy of the suggested models using independent data on soil thermal diffusivity. The simplest models were based on soil bulk density and organic carbon data and provided different…
Scheibehenne, Benjamin; Pachur, Thorsten
2015-04-01
To be useful, cognitive models with fitted parameters should show generalizability across time and allow accurate predictions of future observations. It has been proposed that hierarchical procedures yield better estimates of model parameters than do nonhierarchical, independent approaches, because the formers' estimates for individuals within a group can mutually inform each other. Here, we examine Bayesian hierarchical approaches to evaluating model generalizability in the context of two prominent models of risky choice-cumulative prospect theory (Tversky & Kahneman, 1992) and the transfer-of-attention-exchange model (Birnbaum & Chavez, 1997). Using empirical data of risky choices collected for each individual at two time points, we compared the use of hierarchical versus independent, nonhierarchical Bayesian estimation techniques to assess two aspects of model generalizability: parameter stability (across time) and predictive accuracy. The relative performance of hierarchical versus independent estimation varied across the different measures of generalizability. The hierarchical approach improved parameter stability (in terms of a lower absolute discrepancy of parameter values across time) and predictive accuracy (in terms of deviance; i.e., likelihood). With respect to test-retest correlations and posterior predictive accuracy, however, the hierarchical approach did not outperform the independent approach. Further analyses suggested that this was due to strong correlations between some parameters within both models. Such intercorrelations make it difficult to identify and interpret single parameters and can induce high degrees of shrinkage in hierarchical models. Similar findings may also occur in the context of other cognitive models of choice.
Lin, Miao-Hsiang; Hsiung, Chao A.
1994-01-01
Two simple empirical approximate Bayes estimators are introduced for estimating domain scores under binomial and hypergeometric distributions respectively. Criteria are established regarding use of these functions over maximum likelihood estimation counterparts. (SLD)
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Huang, Susie Shih-Yin; Strathe, Anders Bjerring; Hung, Silas S O; Boston, Raymond C; Fadel, James G
2012-03-01
The biological function of selenium (Se) is determined by its form and concentration. Selenium is an essential micronutrient for all vertebrates, however, at environmental levels, it is a potent toxin. In the San Francisco Bay-Delta, Se pollution threatens top predatory fish, including white sturgeon. A multi-compartmental Bayesian hierarchical model was developed to estimate the fractional rates of absorption, disposition, and elimination of selenocompounds, in white sturgeon, from tissue measurements obtained in a previous study (Huang et al., 2012). This modeling methodology allows for a population based approach to estimate kinetic physiological parameters in white sturgeon. Briefly, thirty juvenile white sturgeon (five per treatment) were orally intubated with a control (no selenium) or a single dose of Se (500 μg Se/kg body weight) in the form of one inorganic (Selenite) or four organic selenocompounds: selenocystine (SeCys), l-selenomethionine (SeMet), Se-methylseleno-l-cysteine (MSeCys), or selenoyeast (SeYeast). Blood and urine Se were measured at intervals throughout the 48h post intubation period and eight tissues were sampled at 48 h. The model is composed of four state variables, conceptually the gut (Q1), blood (Q2), and tissue (Q3); and urine (Q0), all in units of μg Se. Six kinetics parameters were estimated: the fractional rates [1/h] of absorption, tissue disposition, tissue release, and urinary elimination (k12, k23, k32, and k20), the proportion of the absorbed dose eliminated through the urine (f20), and the distribution blood volume (V; percent body weight, BW). The parameter k12 was higher in sturgeon given the organic Se forms, in the descending order of MSeCys > SeMet > SeCys > Selenite > SeYeast. The parameters k23 and k32 followed similar patterns, and f20 was lowest in fish given MSeCys. Selenium form did not affect k20 or V. The parameter differences observed can be attributed to the different mechanisms of transmucosal transport
Hierarchical parameter estimation of DFIG and drive train system in a wind turbine generator
Pan, Xueping; Ju, Ping; Wu, Feng; Jin, Yuqing
2017-09-01
A new hierarchical parameter estimation method for the doubly fed induction generator (DFIG) and drive train system in a wind turbine generator (WTG) is proposed in this paper. Firstly, the parameters of the DFIG and the drive train are estimated locally under different types of disturbances. Secondly, a coordinated estimation method is applied to identify the parameters of the DFIG and the drive train simultaneously, so as to attain globally optimal estimation results. The main benefit of the proposed scheme is improved estimation accuracy. Estimation results confirm the applicability of the proposed technique.
Bayes Estimation of Shape Parameter of Minimax Distribution under Different Loss Functions
Directory of Open Access Journals (Sweden)
Lanping Li
2015-04-01
The objective of this study is the Bayes estimation of the unknown shape parameter of the Minimax distribution. The prior used here is a non-informative quasi-prior for the parameter. Bayes estimators are derived under the squared error loss function and three asymmetric loss functions: the LINEX, precautionary and entropy loss functions. Monte Carlo simulations are performed to compare the performances of these Bayes estimates under different situations. Finally, we summarize the results and give the conclusions of this study.
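For reference, given posterior draws of a positive parameter, the Bayes estimators under the four losses named above have standard closed forms: the posterior mean (squared error), -a⁻¹ ln E[e^(-aθ)] (LINEX with shape a), √E[θ²] (precautionary), and (E[θ⁻¹])⁻¹ (entropy). A minimal Monte Carlo sketch using an illustrative gamma stand-in for the posterior (not the Minimax-distribution posterior from the paper):

```python
import math, random

def bayes_estimators(theta, a=1.0):
    """Closed-form Bayes estimates from posterior draws of a positive parameter."""
    n = len(theta)
    sel = sum(theta) / n                                             # squared error loss
    linex = -math.log(sum(math.exp(-a * t) for t in theta) / n) / a  # LINEX loss, shape a
    precautionary = math.sqrt(sum(t * t for t in theta) / n)         # precautionary loss
    entropy = n / sum(1.0 / t for t in theta)                        # entropy loss
    return sel, linex, precautionary, entropy

random.seed(0)
draws = [random.gammavariate(3.0, 0.5) for _ in range(50_000)]  # stand-in posterior draws
sel, linex, prec, ent = bayes_estimators(draws)
```

By Jensen's inequality the LINEX (a > 0) and entropy estimates never exceed the posterior mean, while the precautionary estimate never falls below it, which is the asymmetry the simulation study compares.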
THE SUPERIORITY OF EMPIRICAL BAYES ESTIMATION OF PARAMETERS IN PARTITIONED NORMAL LINEAR MODEL
Institute of Scientific and Technical Information of China (English)
Zhang Weiping; Wei Laisheng
2008-01-01
In this article, the empirical Bayes (EB) estimators are constructed for the estimable functions of the parameters in partitioned normal linear model. The superiorities of the EB estimators over ordinary least-squares (LS) estimator are investigated under mean square error matrix (MSEM) criterion.
An Evaluation of Empirical Bayes's Estimation of Value-Added Teacher Performance Measures
Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul N.; Wooldridge, Jeffrey M.
2015-01-01
Empirical Bayes's (EB) estimation has become a popular procedure used to calculate teacher value added, often as a way to make imprecise estimates more reliable. In this article, we review the theory of EB estimation and use simulated and real student achievement data to study the ability of EB estimators to properly rank teachers. We compare the…
Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul; Wooldridge, Jeffrey M.
2014-01-01
Empirical Bayes' (EB) estimation is a widely used procedure to calculate teacher value-added. It is primarily viewed as a way to make imprecise estimates more reliable. In this paper we review the theory of EB estimation and use simulated data to study its ability to properly rank teachers. We compare the performance of EB estimators with that of…
Ogle, Kiona; Ryan, Edmund; Dijkstra, Feike A.; Pendall, Elise
2016-12-01
Nonsteady state chambers are often employed to measure soil CO2 fluxes. CO2 concentrations (C) in the headspace are sampled at different times (t), and fluxes (f) are calculated from regressions of C versus t based on a limited number of observations. Variability in the data can lead to poor fits and unreliable f estimates; groups with too few observations or poor fits are often discarded, resulting in "missing" f values. We solve these problems by fitting linear (steady state) and nonlinear (nonsteady state, diffusion based) models of C versus t, within a hierarchical Bayesian framework. Data are from the Prairie Heating and CO2 Enrichment study that manipulated atmospheric CO2, temperature, soil moisture, and vegetation. CO2 was collected from static chambers biweekly during five growing seasons, resulting in >12,000 samples and >3100 groups and associated fluxes. We compare f estimates based on nonhierarchical and hierarchical Bayesian (B versus HB) versions of the linear and diffusion-based (L versus D) models, resulting in four different models (BL, BD, HBL, and HBD). Three models fit the data exceptionally well (R2 ≥ 0.98), but the BD model was inferior (R2 = 0.87). The nonhierarchical models (BL and BD) produced highly uncertain f estimates (wide 95% credible intervals), whereas the hierarchical models (HBL and HBD) produced very precise estimates. Of the hierarchical versions, the linear model (HBL) underestimated f by 33% relative to the nonsteady state model (HBD). The hierarchical models offer improvements upon traditional nonhierarchical approaches to estimating f, and we provide example code for the models.
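The benefit of hierarchical partial pooling described above can be illustrated with a simplified empirical-Bayes version: per-chamber OLS slopes (flux estimates) are shrunk toward the across-chamber mean with precision weights. All data below are simulated, and the shrinkage formula is a crude stand-in for the full hierarchical Bayesian models (HBL/HBD) in the abstract:

```python
import random, statistics

def ols_slope(ts, cs):
    """OLS slope of concentration vs. time, with its sampling variance."""
    tbar = sum(ts) / len(ts)
    cbar = sum(cs) / len(cs)
    sxx = sum((t - tbar) ** 2 for t in ts)
    slope = sum((t - tbar) * (c - cbar) for t, c in zip(ts, cs)) / sxx
    resid = [c - cbar - slope * (t - tbar) for t, c in zip(ts, cs)]
    var = sum(r * r for r in resid) / (len(ts) - 2) / sxx
    return slope, var

random.seed(1)
ts = [0.0, 10.0, 20.0, 30.0]                                # chamber sampling times (min)
true_fluxes = [random.gauss(1.0, 0.2) for _ in range(30)]   # one "true" slope per chamber
groups = [[f * t + random.gauss(0, 0.8) for t in ts] for f in true_fluxes]

raw = [ols_slope(ts, cs) for cs in groups]
mu = statistics.mean(s for s, _ in raw)                     # grand mean of raw slopes
tau2 = max(statistics.pvariance([s for s, _ in raw]) -
           statistics.mean(v for _, v in raw), 1e-6)        # crude between-chamber variance
# precision-weighted shrinkage of each noisy slope toward the grand mean
shrunk = [(s / v + mu / tau2) / (1.0 / v + 1.0 / tau2) for s, v in raw]
```

Chambers with noisy, poorly determined slopes are pulled strongly toward the group mean, which is why the hierarchical estimates in the abstract have much narrower credible intervals than the per-group fits.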
Directory of Open Access Journals (Sweden)
Mohamed Mahmoud Mohamed
2016-09-01
In this paper we develop approximate Bayes estimators of the parameters, reliability, and hazard rate functions of the Logistic distribution by using Lindley's approximation, based on progressively type-II censored samples. Non-informative prior distributions are used for the parameters. Quadratic, LINEX and general entropy loss functions are used. The statistical performances of the Bayes estimates relative to the quadratic, LINEX and general entropy loss functions are compared to those of the maximum likelihood estimates in a simulation study.
Directory of Open Access Journals (Sweden)
Rui Zhang
2014-12-01
This paper presents a hierarchical approach to network construction and time series estimation in persistent scatterer interferometry (PSI) for deformation analysis using time series of high-resolution satellite SAR images. To balance computational efficiency and solution accuracy, a divide-and-conquer algorithm (i.e., two levels of PS networking and solution) is proposed for extracting deformation rates of a study area. The algorithm was tested using 40 high-resolution TerraSAR-X images collected between 2009 and 2010 over Tianjin, China, for subsidence analysis, and validated against ground-based leveling measurements. The experimental results indicate that the hierarchical approach can remarkably reduce computing time and memory requirements, and that the subsidence measurements derived from the hierarchical solution are in good agreement with the leveling data.
A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates
Huang, Weizhang; Kamenski, Lennard; Lang, Jens
2010-03-01
A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
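The symmetric Gauss-Seidel iteration mentioned above is straightforward to sketch. This toy version solves a hypothetical 3x3 SPD system (not the authors' finite-element error problem), doing one forward and one backward sweep per iteration:

```python
def sgs_solve(A, b, iters=50):
    """Symmetric Gauss-Seidel: one forward and one backward sweep per iteration."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):                      # forward sweep
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        for i in reversed(range(n)):            # backward sweep
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0, 0.0],                           # small SPD tridiagonal test matrix
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
x = sgs_solve(A, b)
```

For diagonally dominant systems like this one the iteration converges rapidly, consistent with the paper's observation that a few sweeps already give a usable approximation of the error.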
HIERARCHICAL ADAPTIVE ROOD PATTERN SEARCH FOR MOTION ESTIMATION AT VIDEO SEQUENCE ANALYSIS
Directory of Open Access Journals (Sweden)
V. T. Nguyen
2016-05-01
Subject of Research. The paper deals with motion estimation algorithms for the analysis of video sequences in the compression standards MPEG-4 Visual and H.264. A new algorithm is proposed based on an analysis of the advantages and disadvantages of existing algorithms. Method. The algorithm is called hierarchical adaptive rood pattern search (Hierarchical ARPS, HARPS). This new algorithm includes the classic adaptive rood pattern search (ARPS) and hierarchical search MP (Hierarchical search, or Mean pyramid). All motion estimation algorithms were implemented using the MATLAB package and tested with several video sequences. Main Results. The criteria for evaluating the algorithms were: speed, peak signal-to-noise ratio, mean square error and mean absolute deviation. The proposed method showed much better performance at a comparable error and deviation. The peak signal-to-noise ratio in different video sequences shows both better and worse results than the characteristics of known algorithms, so it requires further investigation. Practical Relevance. Application of this algorithm in MPEG-4 and H.264 codecs instead of the standard one can significantly reduce compression time. This feature makes it suitable for telecommunication systems for multimedia data storage, transmission and processing.
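As context for ARPS-style methods, here is a minimal sketch of the exhaustive full-search block matching that such algorithms are designed to accelerate. The frames are synthetic (seeded random pixels shifted by a known motion vector), and all names are illustrative:

```python
import random

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized pixel blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
                          for a, b in zip(ra, rb))

def full_search(ref, cur, by, bx, bs=4, sr=3):
    """Exhaustively search a +/-sr window in `ref` for the block of `cur` at (by, bx)."""
    cur_blk = [row[bx:bx + bs] for row in cur[by:by + bs]]
    best = None
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > len(ref) or x + bs > len(ref[0]):
                continue  # candidate block falls outside the reference frame
            cand = [row[x:x + bs] for row in ref[y:y + bs]]
            cost = sad(cur_blk, cand)
            if best is None or cost < best[0]:
                best = (cost, dy, dx)
    return best[1], best[2]

random.seed(0)
ref = [[random.randrange(256) for _ in range(20)] for _ in range(20)]
cur = [[ref[y + 1][x + 2] for x in range(16)] for y in range(16)]  # known shift (1, 2)
mv = full_search(ref, cur, by=4, bx=4)
```

Full search evaluates every candidate in the window; pattern-search methods such as ARPS visit only a small subset of these positions, which is where the speedup in the paper comes from.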
Nutrient load estimates for Manila Bay, Philippines using population data
Sotto, Lara Patricia A; Beusen, Arthur H W; Villanoy, Cesar L.; Bouwman, Lex F.; Jacinto, Gil S.
2015-01-01
A major source of nutrient load to periodically hypoxic Manila Bay is the urban nutrient waste water flow from humans and industries to surface water. In Manila alone, the population density is as high as 19,137 people/km2. A model based on a global point source model by Morée et al. (2013) was used
Using MCMC chain outputs to efficiently estimate Bayes factors
Morey, Richard D.; Rouder, Jeffrey N.; Pratte, Michael S.; Speckman, Paul L.
2011-01-01
One of the most important methodological problems in psychological research is assessing the reasonableness of null models, which typically constrain a parameter to a specific value such as zero. Bayes factor has been recently advocated in the statistical and psychological literature as a principled
Performance Estimation for Hardware/Software codesign using Hierarchical Colored Petri Nets
DEFF Research Database (Denmark)
Grode, Jesper Nicolai Riis; Madsen, Jan; Jerraya, Ahmed
1998-01-01
This paper presents an approach for abstract modeling of the functional behavior of hardware architectures using Hierarchical Colored Petri Nets (HCPNs). Using HCPNs as architectural models has several advantages such as higher estimation accuracy, higher flexibility, and the need for only one estimation tool. This makes the approach very useful for designing component models used for performance estimation in Hardware/Software Codesign frameworks such as the LYCOS system. The paper presents the methodology and rules for designing component models using HCPNs. Two examples of architectural models…
NEW METHOD TO ESTIMATE SCALING OF POWER-LAW DEGREE DISTRIBUTION AND HIERARCHICAL NETWORKS
Institute of Scientific and Technical Information of China (English)
YANG Bo; DUAN Wen-qi; CHEN Zhong
2006-01-01
A new method and corresponding numerical procedure are introduced to estimate the scaling exponents of the power-law degree distribution and the hierarchical clustering function for complex networks. This method can overcome the bias and inaccuracy of the graphical linear fitting methods commonly used in current network research. Furthermore, it is verified to have higher goodness-of-fit than graphical methods by comparing the KS (Kolmogorov-Smirnov) test statistics for 10 CNN (Connecting Nearest-Neighbor) networks.
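A common maximum-likelihood alternative to graphical linear fitting, for a continuous power law with known lower cutoff x_min, is the closed-form estimator α̂ = 1 + n / Σ ln(xᵢ/x_min). A minimal sketch on synthetic draws (not the CNN networks from the paper, and the standard Hill/Clauset-type MLE shown is not necessarily the authors' method):

```python
import math, random

def powerlaw_mle(xs, xmin):
    """Continuous power-law exponent MLE: alpha = 1 + n / sum(ln(x / xmin))."""
    return 1.0 + len(xs) / sum(math.log(x / xmin) for x in xs)

random.seed(42)
alpha, xmin = 2.5, 1.0
# inverse-CDF sampling from p(x) ~ x^(-alpha) for x >= xmin
xs = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(50_000)]
alpha_hat = powerlaw_mle(xs, xmin)
```

Unlike a least-squares line through a log-log histogram, this estimator is unbiased in the large-sample limit and does not depend on binning choices, which is the kind of bias the abstract criticizes.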
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.
Wiecki, Thomas V; Sofer, Imri; Frank, Michael J
2013-01-01
The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
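The generative model that HDDM fits can be illustrated with a naive single-trial simulation of the drift-diffusion process (Euler-Maruyama discretization; the parameter names v, a, z, t0 follow common DDM conventions, and this sketch illustrates the model itself, not the HDDM API):

```python
import random

def simulate_ddm(v=1.0, a=2.0, z=0.5, t0=0.3, sigma=1.0, dt=0.001):
    """One trial of the drift-diffusion model via Euler-Maruyama integration."""
    x, t = z * a, 0.0                 # evidence starts at relative point z of boundary a
    while 0.0 < x < a:                # accumulate until a boundary is crossed
        x += v * dt + sigma * dt ** 0.5 * random.gauss(0.0, 1.0)
        t += dt
    choice = 1 if x >= a else 0       # upper vs. lower boundary
    return choice, t0 + t             # response time includes non-decision time t0

random.seed(7)
trials = [simulate_ddm() for _ in range(500)]
accuracy = sum(c for c, _ in trials) / len(trials)
```

With a positive drift rate most trials terminate at the upper boundary, and the resulting response-time distributions show the right skew characteristic of empirical data; hierarchical estimation inverts exactly this process to recover v, a, z, and t0 per subject.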
Hierarchical Search Motion Estimation Algorithms for Real-time Video Coding
Institute of Scientific and Technical Information of China (English)
1998-01-01
Data fetching and memory management are two factors as important as computational complexity in Motion Estimation (ME) implementation. In this paper, a new Large-scale Sampling Hierarchical Search motion estimation algorithm (LSHS) is proposed. The LSHS is suitable for real-time video coding with low computational complexity, reduced data fetching and simple memory access. The experimental results indicate that the average decoding PSNR with LSHS is only about 0.2 dB lower than that with the Full Search (FS) scheme.
Valls, Víctor; Cano, Cristina; Bellalta, Boris; Oliver, Miquel
2012-01-01
The paper presents two mechanisms for designing an on-demand, reliable and efficient collection protocol for Wireless Sensor Networks. The former is the Bidirectional Link Quality Estimation, which allows nodes to easily and quickly compute the quality of a link between a pair of nodes. The latter, Hierarchical Range Sectoring, organizes sensors in different sectors based on their location within the network. Based on this organization, nodes from each sector are coordinated to transmit in specific periods of time to reduce the hidden terminal problem. To evaluate these two mechanisms, a protocol called HBCP (Hierarchical-Based Collection Protocol), that implements both mechanisms, has been implemented in TinyOS 2.1, and evaluated in a testbed using TelosB motes. The results show that the HBCP protocol is able to achieve a very high reliability, especially in large networks and in scenarios with bottlenecks.
Pu, Jie; Fang, Di; Wilson, Jeffrey R
2017-02-03
The analysis of correlated binary data is commonly addressed through the use of conditional models with random effects included in the systematic component, as opposed to generalized estimating equations (GEE) models that address the random component. Since the joint distribution of the observations is usually unknown, the conditional distribution is a natural approach. Our objective was to compare the fit of different binary models for correlated data on tobacco use. We advocate that the joint modeling of the mean and dispersion may at times be just as adequate. We assessed the ability of these models to account for the intraclass correlation. In so doing, we concentrated on fitting logistic regression models to address smoking behaviors. Frequentist and Bayesian hierarchical models were used to predict conditional probabilities, and the joint modeling (GLM and GAM) models were used to predict marginal probabilities. These models were fitted to National Longitudinal Study of Adolescent to Adult Health (Add Health) data on tobacco use. We found that people were less likely to smoke if they had higher income, a high school or higher education, and were religious. Individuals were more likely to smoke if they had abused drugs or alcohol, spent more time on TV and video games, or had been arrested. Moreover, individuals who drank alcohol early in life were more likely to be regular smokers. Children who experienced mistreatment from their parents were more likely to use tobacco regularly. The joint modeling of the mean and dispersion models offered a flexible and meaningful method of addressing the intraclass correlation. They do not require one to identify random effects nor to distinguish one level of the hierarchy from another. Moreover, once the significant random effects are identified, one can obtain results similar to those of the random coefficient models. We found that the set of marginal models accounting for extravariation through the additional dispersion submodel produced…
Liang, Rong; Zhou, Shu-dong; Li, Li-xia; Zhang, Jun-guo; Gao, Yan-hui
2013-09-01
This paper aims to implement bootstrapping for hierarchical data and to provide a method for estimating the confidence interval (CI) of the intraclass correlation coefficient (ICC). First, we use the mixed-effects model to estimate the ICC from repeated-measurement data and from two-stage sampling data. Then, we use the bootstrap method to estimate CIs for the related ICCs. Finally, the influences of different bootstrapping strategies on the ICC's CIs are compared. The repeated-measurement example shows that the CI from cluster bootstrapping contains the true ICC value, whereas the random bootstrapping method, which ignores the hierarchical structure of the data, yields an invalid CI. Results from the two-stage example show bias among the cluster bootstrapping ICC means; the ICC of the original sample is the smallest, but has a wide CI. It is necessary to treat the structure of the data as important when hierarchical data are resampled. Bootstrapping appears to perform better at higher levels of the hierarchy than at lower ones.
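The cluster-versus-random bootstrap contrast described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the one-way ANOVA ICC estimator, the simulated two-stage data, and all parameter values are assumptions made for the example.

```python
import numpy as np

def icc_anova(groups):
    """One-way ANOVA estimator of the ICC for a balanced design."""
    data = np.array(groups)
    k, n = data.shape                                  # clusters, members per cluster
    grand = data.mean()
    msb = n * ((data.mean(axis=1) - grand) ** 2).sum() / (k - 1)   # between-cluster MS
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

def cluster_bootstrap_ci(groups, n_boot=2000, alpha=0.05, seed=0):
    """Resample whole clusters, not individual rows, to respect the hierarchy."""
    rng = np.random.default_rng(seed)
    k = len(groups)
    stats = [icc_anova([groups[i] for i in rng.integers(0, k, size=k)])
             for _ in range(n_boot)]
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# toy two-stage data: 30 clusters of 5; between-sd 1, within-sd 2 -> true ICC = 0.2
rng = np.random.default_rng(42)
groups = [rng.normal(0, 1) + rng.normal(0, 2, size=5) for _ in range(30)]
lo, hi = cluster_bootstrap_ci(groups)
```

A "random" bootstrap would instead resample the 150 individual observations, breaking the within-cluster dependence; as the abstract reports, that tends to produce an invalid interval.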
Kearns, Jack
Empirical Bayes point estimates of true score may be obtained if the distribution of observed score for a fixed examinee is approximated in one of several ways by a well-known compound binomial model. The Bayes estimates of true score may be expressed in terms of the observed score distribution and the distribution of a hypothetical binomial test…
Estimating the Economic Value of Narwhal and Beluga Hunts in Hudson Bay, Nunavut
Hoover, C.; Bailey, M.L.; Higdon, J.; Ferguson, S.H.; Sumaila, R.
2013-01-01
Hunting of narwhal (Monodon monoceros) and beluga (Delphinapterus leucas) in Hudson Bay is an important activity, providing food and income in northern communities, yet few studies detail the economic aspects of these hunts. We outline the uses of narwhal and beluga and estimate the revenues, costs,
Lowman, L.; Barros, A. P.
2014-12-01
Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern individual mechanisms. Landscapes are modified via surface and fluvial erosion and exhumation, each of which takes place over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale dependent, often valid for a single point-wise location or averaging over large areal extents and periods with intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows for estimating erosion rates using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e. individual storms) over a 14-year period (1998-2011). We demonstrate an application of the Bayesian hierarchical modeling framework for five basins and one intermontane basin located in the central Andes between 5°S and 20°S. Using remotely sensed data of current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high resolution (3 arc-seconds) digital elevation model (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations and 1-2 orders of magnitude larger than most millennial and million year timescale estimates from thermochronology and cosmogenic nuclides.
A hierarchical statistical model for estimating population properties of quantitative genes
Wu Rongling
2002-06-01
Background: Earlier methods for detecting major genes responsible for a quantitative trait rely critically upon a well-structured pedigree in which the segregation pattern of genes exactly follows Mendelian inheritance laws. However, for many outcrossing species, such pedigrees are not available and genes also display population properties. Results: In this paper, a hierarchical statistical model is proposed to monitor the existence of a major gene based on its segregation and transmission across two successive generations. The model is implemented with an EM algorithm to provide maximum likelihood estimates for the genetic parameters of the major locus. This new method is successfully applied to identify an additive gene having a large effect on stem height growth of aspen trees. The estimates of population genetic parameters for this major gene can be generalized to the original breeding population from which the parents were sampled. A simulation study is presented to evaluate finite-sample properties of the model. Conclusions: A hierarchical model was derived for detecting major genes affecting a quantitative trait based on progeny tests of outcrossing species. The new model takes into account the population genetic properties of genes and is expected to enhance the accuracy, precision and power of gene detection.
Progressive Bayes: a new framework for nonlinear state estimation
Hanebeck, Uwe D.; Briechle, Kai; Rauh, Andreas
2003-04-01
This paper is concerned with recursively estimating the internal state of a nonlinear dynamic system by processing noisy measurements and the known system input. In the case of continuous states, an exact analytic representation of the probability density characterizing the estimate is generally too complex for recursive estimation or even impossible to obtain. Hence, it is replaced by a convenient type of approximate density characterized by a finite set of parameters. Of course, parameters are desired that systematically minimize a given measure of deviation between the (often unknown) exact density and its approximation, which in general leads to a complicated optimization problem. Here, a new framework for state estimation based on progressive processing is proposed. Rather than trying to solve the original problem, it is exactly converted into a corresponding system of explicit ordinary first-order differential equations. Solving this system over a finite "time" interval yields the desired optimal density parameters.
Improved Estimates of the Milky Way's Disk Scale Length From Hierarchical Bayesian Techniques
Licquia, Timothy C
2016-01-01
The exponential scale length ($L_d$) of the Milky Way's (MW's) disk is a critical parameter for describing the global physical size of our Galaxy, important both for interpreting other Galactic measurements and helping us to understand how our Galaxy fits into extragalactic contexts. Unfortunately, current estimates span a wide range of values and often are statistically incompatible with one another. Here, we aim to determine an improved, aggregate estimate for $L_d$ by utilizing a hierarchical Bayesian (HB) meta-analysis technique that accounts for the possibility that any one measurement has not properly accounted for all statistical or systematic errors. Within this machinery we explore a variety of ways of modeling the nature of problematic measurements, and then use a Bayesian model averaging technique to derive net posterior distributions that incorporate any model-selection uncertainty. Our meta-analysis combines 29 different (15 visible and 14 infrared) photometric measurements of $L_d$ available in ...
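The shrinkage idea behind such a hierarchical meta-analysis can be illustrated with a toy Normal-Normal random-effects model evaluated on a grid. The measurements and errors below are invented for illustration (they are not the 29 photometric $L_d$ values), and this simple grid posterior stands in for the paper's full HB and model-averaging machinery.

```python
import numpy as np

# each study i reports y_i +/- s_i; true effects scatter around mu with extra sd tau
y = np.array([2.1, 2.6, 3.0, 2.3, 3.5, 2.8])   # hypothetical L_d estimates (kpc)
s = np.array([0.2, 0.3, 0.4, 0.2, 0.5, 0.3])   # their quoted 1-sigma errors

mu_grid = np.linspace(1.5, 4.0, 251)
tau_grid = np.linspace(0.0, 1.5, 151)
M, T = np.meshgrid(mu_grid, tau_grid)          # shapes (151, 251)

# marginal likelihood of each y_i given (mu, tau): Normal(mu, s_i^2 + tau^2)
var = s[:, None, None] ** 2 + T[None] ** 2
loglike = -0.5 * (((y[:, None, None] - M[None]) ** 2) / var + np.log(var)).sum(axis=0)
post = np.exp(loglike - loglike.max())         # flat prior on the grid
post /= post.sum()

mu_post = (post * M).sum()    # posterior mean aggregate estimate
tau_post = (post * T).sum()   # posterior mean scatter beyond the quoted errors
```

A nonzero posterior mean for tau is the grid-model analogue of the paper's conclusion that some measurements understate their errors.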
A spectral-spatial-dynamic hierarchical Bayesian (SSD-HB) model for estimating soybean yield
Kazama, Yoriko; Kujirai, Toshihiro
2014-10-01
A method called a "spectral-spatial-dynamic hierarchical-Bayesian (SSD-HB) model," which can deal with many parameters (such as spectral and weather information all together) by reducing the occurrence of multicollinearity, is proposed. Experiments conducted on soybean yields in Brazil fields with a RapidEye satellite image indicate that the proposed SSD-HB model can predict soybean yield with a higher degree of accuracy than other estimation methods commonly used in remote-sensing applications. In the case of the SSD-HB model, the mean absolute error between estimated yield of the target area and actual yield is 0.28 t/ha, compared to 0.34 t/ha when conventional PLS regression was applied, showing the potential effectiveness of the proposed model.
Hierarchical information fusion for global displacement estimation in microsensor motion capture.
Meng, Xiaoli; Zhang, Zhi-Qiang; Wu, Jian-Kang; Wong, Wai-Choong
2013-07-01
This paper presents a novel hierarchical information fusion algorithm to obtain human global displacement for different gait patterns, including walking, running, and hopping based on seven body-worn inertial and magnetic measurement units. In the first-level sensor fusion, the orientation for each segment is achieved by a complementary Kalman filter (CKF) which compensates for the orientation error of the inertial navigation system solution through its error state vector. For each foot segment, the displacement is also estimated by the CKF, and zero velocity update is included for the drift reduction in foot displacement estimation. Based on the segment orientations and left/right foot locations, two global displacement estimates can be acquired from left/right lower limb separately using a linked biomechanical model. In the second-level geometric fusion, another Kalman filter is deployed to compensate for the difference between the two estimates from the sensor fusion and get more accurate overall global displacement estimation. The updated global displacement will be transmitted to left/right foot based on the human lower biomechanical model to restrict the drifts in both feet displacements. The experimental results have shown that our proposed method can accurately estimate human locomotion for the three different gait patterns with regard to the optical motion tracker.
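The second-level "geometric fusion" step above, combining two noisy displacement estimates into one, reduces in the scalar case to a Kalman-style inverse-variance update. The sketch below is a minimal illustration under an assumed independence of the two estimates; the numbers are hypothetical, not from the paper's experiments.

```python
def fuse(x1, var1, x2, var2):
    """Fuse two independent scalar estimates; inverse-variance
    weighting minimizes the variance of the combined estimate."""
    k = var1 / (var1 + var2)          # gain applied to the innovation (x2 - x1)
    return x1 + k * (x2 - x1), (1 - k) * var1

# hypothetical left-limb vs right-limb global-displacement estimates (metres)
x, v = fuse(10.0, 4.0, 12.0, 1.0)
# x = 11.6 (pulled toward the more precise estimate), v = 0.8 < min(4.0, 1.0)
```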
Cruz-Marcelo, Alejandro; Ensor, Katherine B.; Rosner, Gary L.
2011-01-01
The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material. PMID:21765566
A hierarchical model for estimating density in camera-trap studies
Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.
2009-01-01
Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14·3 animals per 100 km² during 2004. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential ‘holes’ in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based ‘captures’ of individual animals.
A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates
Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh
2016-10-01
We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km² flood-prone basin in Southeast Brazil. The results show a significant reduction in the uncertainty of flood quantile estimates relative to the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare with classical estimates using the index flood method. The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas and the posterior distribution of the…
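The classical, per-site GEV baseline that the hierarchical model improves on can be sketched with SciPy. The synthetic annual-maximum series and its parameters are invented for illustration, and note that SciPy's shape parameter `c` is the negative of the usual GEV shape ξ.

```python
import numpy as np
from scipy import stats

# synthetic annual-maximum streamflow series for one site (not the Brazilian data)
ams = stats.genextreme.rvs(c=-0.1, loc=100.0, scale=25.0, size=60,
                           random_state=np.random.default_rng(1))

c, loc, scale = stats.genextreme.fit(ams)        # site-by-site MLE, no pooling
T = 50.0                                         # return period (years)
q50 = stats.genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
```

The hierarchical model in the abstract replaces exactly this isolated per-site fit: shape parameters are shrunk toward a common mean and location/scale are tied to drainage area, which is what stabilizes quantiles at short-record and out-of-sample sites.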
Probabilistic estimation of numbers and costs of future landslides in the San Francisco Bay region
Crovelli, R.A.; Coe, J.A.
2009-01-01
We used historical records of damaging landslides triggered by rainstorms and a newly developed Probabilistic Landslide Assessment Cost Estimation System (PLACES) to estimate the numbers and direct costs of future landslides in the 10-county San Francisco Bay region. Historical records of damaging landslides in the region are incomplete. Therefore, our estimates of numbers and costs of future landslides are minimal estimates. The estimated mean annual number of future damaging landslides for the entire 10-county region is about 65. Santa Cruz County has the highest estimated mean annual number of damaging future landslides (about 18), whereas Napa, San Francisco, and Solano Counties have the lowest estimated mean numbers of damaging landslides (about 1 each). The estimated mean annual cost of future landslides in the entire region is about US $14.80 million (year 2000 $). The estimated mean annual cost is highest for San Mateo County ($3.24 million) and lowest for Solano County ($0.18 million). The annual per capita cost for the entire region will be about $2.10. Santa Cruz County will have the highest annual per capita cost at $8.45, whereas San Francisco County will have the lowest per capita cost at $0.31. Normalising costs by dividing by the percentage of land area with slopes equal to or greater than 17% indicates that San Francisco County will have the highest cost per square km ($7,101), whereas Santa Clara County will have the lowest cost per square km ($229). These results indicate that the San Francisco Bay region has one of the highest levels of landslide risk in the United States. Compared with landslide cost estimates from the rest of the world, the risk level in the Bay region seems high, but not exceptionally high.
Direct and indirect estimates of natural mortality for Chesapeake Bay blue crab
Hewitt, D.A.; Lambert, D.M.; Hoenig, J.M.; Lipcius, R.N.; Bunnell, D.B.; Miller, T.J.
2007-01-01
Analyses of the population dynamics of blue crab Callinectes sapidus have been complicated by a lack of estimates of the instantaneous natural mortality rate (M). We developed the first direct estimates of M for this species by solving Baranov's catch equation for M given estimates of annual survival rate and exploitation rate. Annual survival rates were estimated from a tagging study on adult female blue crabs in Chesapeake Bay, and female-specific exploitation rates for the same stock were estimated by comparing commercial catches with abundances estimated from a dredge survey. We also used eight published methods based on life history parameters to calculate indirect estimates of M for blue crab. Direct estimates of M for adult females in Chesapeake Bay for the years 2002–2004 ranged from 0.42 to 0.87 per year and averaged 0.71 per year. Indirect estimates of M varied considerably depending on life history parameter inputs and the method used. All eight methods yielded values for M between 0.99 and 1.08 per year, and six of the eight methods yielded values between 0.82 and 1.35 per year. Our results indicate that natural mortality of blue crab is higher than previously believed, and we consider M values between 0.7 and 1.1 per year to be reasonable for the exploitable stock in Chesapeake Bay. Remaining uncertainty about M makes it necessary to evaluate a range of estimates in assessment models.
Deason, Ellen E.
1982-08-01
Surveys of the distribution, abundance and size of the ctenophore Mnemiopsis leidyi were carried out in Narragansett Bay, R.I. over a 5-year period, 1975-1979. Yearly variations were observed in the time of initiation of the ctenophore increase and maximum abundance. Biomass maxima ranged from 0·2 to 3 g dry weight m⁻³ at Station 2 in lower Narragansett Bay, while maximum abundance varied from 20 to 100 animals m⁻³. Ctenophores less than 1 cm in length generally composed up to 50% of the biomass and 95% of the numerical abundance during the peak of the M. leidyi pulse. During the 1978 maxima and the declining stages of the pulse each year, 100% of the population was composed of small animals. M. leidyi populations increased earlier, reached greater maximum abundances, and were more highly dominated by small animals in the upper bay than toward the mouth of the bay. The average clearance rate of M. leidyi larvae feeding on A. tonsa at 22°C was 0·36 l mg⁻¹ dry weight day⁻¹, with apparent selection for nauplii relative to copepodites. Predation and excretion rates applied to ctenophore biomass estimated for Narragansett Bay indicated that M. leidyi excretion is minor but predation removed a bay-wide mean of 20% of the zooplankton standing stock daily during August of 1975 and 1976. Variation in M. leidyi predation at Station 2 was inversely related to mean zooplankton biomass during August and September, which increased 4-fold during the 5-year period.
Kuusela, Mikael
2015-01-01
We consider the high energy physics unfolding problem where the goal is to estimate the spectrum of elementary particles given observations distorted by the limited resolution of a particle detector. This important statistical inverse problem arising in data analysis at the Large Hadron Collider at CERN consists in estimating the intensity function of an indirectly observed Poisson point process. Unfolding typically proceeds in two steps: one first produces a regularized point estimate of the unknown intensity and then uses the variability of this estimator to form frequentist confidence intervals that quantify the uncertainty of the solution. In this paper, we propose forming the point estimate using empirical Bayes estimation which enables a data-driven choice of the regularization strength through marginal maximum likelihood estimation. Observing that neither Bayesian credible intervals nor standard bootstrap confidence intervals succeed in achieving good frequentist coverage in this problem due to the inh...
Evaluation of estimation methods for meiofaunal biomass from a meiofaunal survey in Bohai Bay
张青田; 王新华; 胡桂坤
2010-01-01
Studies in the coastal area of Bohai Bay, China, from July 2006 to October 2007 suggest that the method of meiofaunal biomass estimation affected the meiofaunal analysis. Conventional estimation methods that use a single mean individual weight value for nematodes to calculate total biomass may bias the results. A modified estimation method, named the Subsection Count Method (SCM), was also used to calculate meiofaunal biomass. This entails only a slight increase in workload but generates results of g…
Sánchez Gil, M. Carmen; Berihuete, Angel; Alfaro, Emilio J.; Pérez, Enrique; Sarro, Luis M.
2015-09-01
One of the fundamental goals of modern astronomy is to estimate the physical parameters of galaxies from images in different spectral bands. We present a hierarchical Bayesian model for obtaining age maps from images in the Hα line (taken with the Taurus Tunable Filter (TTF)), the ultraviolet band (far UV or FUV, from GALEX) and infrared bands (24, 70 and 160 microns (μm), from Spitzer). As shown in [1], we present the burst ages for young stellar populations in the nearby and nearly face-on galaxy M74. As shown in the previous work, the Hα to FUV flux ratio gives a good relative indicator of very recent star formation history (SFH). As a nascent star-forming region evolves, the Hα line emission declines earlier than the UV continuum, leading to a decrease in the Hα/FUV ratio. Through a specific star-forming galaxy model (Starburst99, SB99), we can obtain the corresponding theoretical Hα/FUV ratio to compare with our observed flux ratios, and thus estimate the ages of the observed regions. Due to the nature of the problem, it is necessary to propose a model of high complexity to take into account the mean uncertainties and the interrelationship between parameters when the Hα/FUV flux ratio mentioned above is obtained. To address the complexity of the model, we propose a hierarchical Bayesian model, where a joint probability distribution is defined to determine the parameters (age, metallicity, IMF) from the observed data, in this case the observed Hα/FUV flux ratios. The joint distribution of the parameters is described through i.i.d. (independent and identically distributed) samples generated through MCMC (Markov Chain Monte Carlo) techniques.
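The core inversion described above, mapping an observed Hα/FUV ratio back to a burst age through a monotonically declining model curve, can be sketched as follows. The exponential curve here is a made-up stand-in for the SB99 tabulation; only the inversion logic is the point.

```python
import numpy as np

# hypothetical monotone model curve: Hα/FUV ratio versus burst age (Myr),
# standing in for the Starburst99 prediction referenced in the abstract
age_myr = np.linspace(1.0, 10.0, 200)
ratio_model = 10.0 * np.exp(-age_myr / 2.0)

def age_from_ratio(r):
    # np.interp needs increasing sample points, so reverse the declining curve
    return np.interp(r, ratio_model[::-1], age_myr[::-1])

est_age = age_from_ratio(10.0 * np.exp(-3.0 / 2.0))   # should recover ~3 Myr
```

The hierarchical Bayesian model in the abstract wraps this deterministic lookup in a joint probability model so that flux uncertainties and the metallicity/IMF dependence of the curve propagate into the age map.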
BayesLine: Bayesian Inference for Spectral Estimation of Gravitational Wave Detector Noise
Littenberg, Tyson B
2014-01-01
Gravitational wave data from ground-based detectors is dominated by instrument noise. Signals will be comparatively weak, and our understanding of the noise will influence detection confidence and signal characterization. Mis-modeled noise can produce large systematic biases in both model selection and parameter estimation. Here we introduce a multi-component, variable dimension, parameterized model to describe the Gaussian-noise power spectrum for data from ground-based gravitational wave interferometers. Called BayesLine, the algorithm models the noise power spectral density using cubic splines for smoothly varying broad-band noise and Lorentzians for narrow-band line features in the spectrum. We describe the algorithm and demonstrate its performance on data from the fifth and sixth LIGO science runs. Once fully integrated into LIGO/Virgo data analysis software, BayesLine will produce accurate spectral estimation and provide a means for marginalizing inferences drawn from the data over all plausible noise s...
Bellan, Steve E; Gimenez, Olivier; Choquet, Rémi; Getz, Wayne M
2013-04-01
Distance sampling is widely used to estimate the abundance or density of wildlife populations. Methods to estimate wildlife mortality rates have developed largely independently from distance sampling, despite the conceptual similarities between estimation of cumulative mortality and the population density of living animals. Conventional distance sampling analyses rely on the assumption that animals are distributed uniformly with respect to transects and thus require randomized placement of transects during survey design. Because mortality events are rare, however, it is often not possible to obtain precise estimates in this way without infeasible levels of effort. A great deal of wildlife data, including mortality data, is available via road-based surveys. Interpreting these data in a distance sampling framework requires accounting for the non-uniform sampling. Additionally, analyses of opportunistic mortality data must account for the decline in carcass detectability through time. We develop several extensions to distance sampling theory to address these problems. We build mortality estimators in a hierarchical framework that integrates animal movement data, surveillance effort data, and motion-sensor camera trap data, respectively, to relax the uniformity assumption, account for spatiotemporal variation in surveillance effort, and explicitly model carcass detection and disappearance as competing ongoing processes. Analysis of simulated data showed that our estimators were unbiased and that their confidence intervals had good coverage. We also illustrate our approach on opportunistic carcass surveillance data acquired in 2010 during an anthrax outbreak in the plains zebra of Etosha National Park, Namibia. The methods developed here will allow researchers and managers to infer mortality rates from opportunistic surveillance data.
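The conventional line-transect baseline that these extensions relax can be written compactly. This sketch assumes a half-normal detection function and uniform animal distribution relative to the transect (precisely the assumption the authors' hierarchical estimators avoid), and the simulated distances are illustrative, not field data.

```python
import numpy as np

def halfnormal_density(distances, transect_length):
    """Conventional line-transect density estimator with a half-normal
    detection function g(x) = exp(-x^2 / (2 sigma^2))."""
    x = np.asarray(distances, dtype=float)
    sigma2 = (x ** 2).mean()                 # MLE of sigma^2 for the half-normal
    esw = np.sqrt(sigma2 * np.pi / 2.0)      # effective strip half-width
    return len(x) / (2.0 * transect_length * esw)

rng = np.random.default_rng(7)
d = np.abs(rng.normal(0.0, 20.0, size=80))   # perpendicular distances (metres)
D = halfnormal_density(d, transect_length=5000.0)  # objects per square metre
```

For carcasses the same machinery needs two further ingredients the abstract names: a non-uniform availability distribution (animals relative to roads) and a detection/disappearance process for persistence over time.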
Amplitude estimation of a sine function based on confidence intervals and Bayes' theorem
Eversmann, Dennis; Rosenthal, Marcel
2015-01-01
This paper discusses amplitude estimation using data drawn from a sine-like probability density function. If a simple least-squares fit is used, a significant bias is observed for small amplitudes. It is shown that a proper treatment using the Feldman-Cousins algorithm of likelihood ratios allows one to construct improved confidence intervals. Using Bayes' theorem, a probability density function for the amplitude is derived. An application shows that it leads to better estimates than a simple least-squares fit.
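The small-amplitude bias of a naive least-squares fit is easy to demonstrate: when the amplitude is recovered as sqrt(a² + b²) from the two linear sine/cosine coefficients, noise alone inflates the estimate (it follows a Rice distribution). The simulation below is an illustrative reconstruction of this effect, not the authors' setup or parameter values.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
S, C = np.sin(t), np.cos(t)
A_true, sigma = 0.05, 1.0                 # small amplitude, unit noise

def fit_amplitude(y):
    # linear least squares for y = a sin t + b cos t on a full period,
    # where sum(sin^2) = sum(cos^2) = n/2, so a_hat = 2 * mean(y * sin t)
    a = 2 * (y * S).mean()
    b = 2 * (y * C).mean()
    return np.hypot(a, b)                 # amplitude = sqrt(a^2 + b^2), always >= 0

est = [fit_amplitude(A_true * S + rng.normal(0, sigma, t.size))
       for _ in range(2000)]
bias = np.mean(est) - A_true              # positive: amplitude is overestimated
```

Because the estimator is non-negative by construction, noise cannot average out near zero amplitude, which is why the likelihood-ratio and Bayesian treatments in the abstract help most in that regime.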
Hierarchical Bayesian methods for estimation of parameters in a longitudinal HIV dynamic system.
Huang, Yangxin; Liu, Dacheng; Wu, Hulin
2006-06-01
HIV dynamics studies have significantly contributed to the understanding of HIV infection and antiviral treatment strategies. But most studies are limited to short-term viral dynamics due to the difficulty of establishing a relationship of antiviral response with multiple treatment factors such as drug exposure and drug susceptibility during long-term treatment. In this article, a mechanism-based dynamic model is proposed for characterizing long-term viral dynamics with antiretroviral therapy, described by a set of nonlinear differential equations without closed-form solutions. In this model we directly incorporate drug concentration, adherence, and drug susceptibility into a function of treatment efficacy, defined as an inhibition rate of virus replication. We investigate a Bayesian approach under the framework of hierarchical Bayesian (mixed-effects) models for estimating unknown dynamic parameters. In particular, interest focuses on estimating individual dynamic parameters. The proposed methods not only help to alleviate the difficulty in parameter identifiability, but also flexibly deal with sparse and unbalanced longitudinal data from individual subjects. For illustration purposes, we present one simulation example to implement the proposed approach and apply the methodology to a data set from an AIDS clinical trial. The basic concept of the longitudinal HIV dynamic systems and the proposed methodologies are generally applicable to any other biomedical dynamic systems.
A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates
An, Qian; Kang, Jian; Song, Ruiguang; Hall, H. Irene
2016-01-01
Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given no previous positive test has been obtained prior to the start of the time, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. PMID:26567891
Institute of Scientific and Technical Information of China (English)
WANG Yaping; CHU Yong Shik; LEE Hee Jun; HAN Choong Keun; OH Byung Chul
2005-01-01
A Nortek acoustic Doppler current profiler (NDP) was installed on a moving vessel to survey the entrance to Jinhae Bay on August 22-23, 2001. The current velocity and acoustic backscattering signal were collected along two cross-sections; water samples were also collected during the measurement. The acoustic signals were normalized to compensate for the loss incurred by acoustic beam spreading in the seawater. The in situ calibration shows that a significant relationship is present between suspended sediment concentrations (SSC) and normalized acoustic signals. Two acoustic parameters have been determined to construct an acoustic-concentration model. Using this derived model, the SSC patterns along the surveyed cross-sections were obtained by the conversion of acoustic data. Using the current velocity and SSC data, the flux of suspended sediment was estimated. It indicates that the sediment transport into the bay through the entrance is on the order of 100 t per tidal cycle.
Galili, Tal; Meilijson, Isaac
2016-01-02
The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].
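The Rao-Blackwell construction described above is easy to demonstrate numerically. The following is a minimal sketch (my own toy simulation, not the paper's counterexample, which concerns an incomplete sufficient statistic): the crude unbiased estimator 2·X₁ of θ for a Uniform(0, θ) sample is conditioned on the sample maximum, the minimal sufficient statistic, and the variance reduction is checked by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 10.0, 5, 200_000

x = rng.uniform(0, theta, size=(reps, n))

# Crude unbiased estimator of theta: twice a single observation.
crude = 2 * x[:, 0]

# Rao-Blackwell improvement: condition on the sample maximum M, the minimal
# sufficient statistic for Uniform(0, theta).  E[2*X1 | M = m] = m*(n+1)/n,
# which is the classical UMVUE.
rb = x.max(axis=1) * (n + 1) / n

print(crude.mean(), rb.mean())   # both close to theta = 10 (unbiased)
print(crude.var(), rb.var())     # conditioning sharply reduces the variance
```

Here the improvement happens to be optimal because the maximum is complete sufficient; the paper's point is precisely that without completeness such an improvement can itself be improvable.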
Directory of Open Access Journals (Sweden)
Trevor G. Jones
2015-08-01
Full Text Available Mangroves are found throughout the tropics, providing critical ecosystem goods and services to coastal communities and supporting rich biodiversity. Globally, mangroves are being rapidly degraded and deforested at rates exceeding loss in many tropical inland forests. Madagascar contains around 2% of the global distribution, >20% of which has been deforested since 1990, primarily from over-harvest for forest products and conversion for agriculture and aquaculture. While historically not prominent, mangrove loss in Madagascar’s Mahajamba Bay is increasing. Here, we focus on Mahajamba Bay, presenting long-term dynamics calculated using United States Geological Survey (USGS) national-level mangrove maps contextualized with socio-economic research and ground observations, and the results of contemporary (circa 2011) mapping of dominant mangrove types. The analysis of the USGS data indicated 1050 hectares (3.8%) lost from 2000 to 2010, which socio-economic research suggests is increasingly driven by commercial timber extraction. Contemporary mapping results permitted stratified sampling based on spectrally distinct and ecologically meaningful mangrove types, allowing for the first-ever vegetation carbon stock estimates for Mahajamba Bay. The overall mean carbon stock across all mangrove classes was estimated to be 100.97 ± 10.49 Mg C ha−1. High stature closed-canopy mangroves had the highest average carbon stock estimate (i.e., 166.82 ± 15.28 Mg C ha−1). These estimates are comparable to other published values in Madagascar and elsewhere in the Western Indian Ocean and demonstrate the ecological variability of Mahajamba Bay’s mangroves and their value towards climate change mitigation.
Directory of Open Access Journals (Sweden)
Hüseyin Oğuz Çoban
2014-05-01
Full Text Available One of the major steps in setting up a bioenergy utilization system is to determine the potential availability of forest biomass. This study illustrates the methodology of estimating the spatial availability of primary forest residues in naturally occurring brutian pine forests, which are considerable components of forest biomass. A spatial database system was created to respectively calculate the theoretical, technical, and spatially economical biomass potentials that were subject to limitation by stand ages, forest functions, site indexes, slopes, and distance zones. To quantify primary forest residues (PFR), the conversion rates were processed, ranging from 24.1% to 26% of allowable cut volume for early thinning, 15% to 20% for thinning, and 11.1% for final felling. The results showed that the total accumulation of theoretical primary forest residues was 86,554.7 green tons in 10 years’ time, 71% of which could be ecologically available. Furthermore, the spatially available biomass potential was 6,095.4 tons per year within a radial distance of 30 km. In the future, the proposed hierarchical process can be applied to brutian pine stands in the Mediterranean region using a larger dataset that will provide a truer representation of the regional variation.
Using variance components to estimate power in a hierarchically nested sampling design.
Dzul, Maria C; Dixon, Philip M; Quist, Michael C; Dinsmore, Stephen J; Bower, Michael R; Wilson, Kevin P; Gaines, D Bailey
2013-01-01
We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). Sampling design for larval DHP included surveys (5 days each spring 2007-2009), events, and plots. Each survey was comprised of three counting events, where DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, fixed plot designs had greater power than random plot designs, and the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.
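The link between variance components and allocation of sampling effort can be sketched with a back-of-the-envelope calculation. The components below are hypothetical values, not the pupfish estimates; the function gives the standard error of a survey-level mean under a nested events-within-survey, plots-within-event design.

```python
import numpy as np

# Illustrative variance components (made-up numbers, not from the study)
var_event, var_plot = 4.0, 9.0

def se_of_survey_mean(n_events, n_plots):
    """SE of the survey mean when n_events counting events each cover
    n_plots plots, under the two nested variance components above."""
    return np.sqrt(var_event / n_events + var_plot / (n_events * n_plots))

# Which allocation pays off more depends on which component dominates:
print(se_of_survey_mean(3, 9))    # baseline design
print(se_of_survey_mean(3, 18))   # double the plots (lowest level)
print(se_of_survey_mean(6, 9))    # double the events
```

Feeding such standard errors into a power calculation for a trend or difference of interest reproduces, in miniature, the survey-design comparison the abstract describes.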
Rodhouse, T.J.; Irvine, K.M.; Vierling, K.T.; Vierling, L.A.
2011-01-01
Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on strength of statistical shrinkage, autocorrelation and topographic heterogeneity-a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach.
Sundara, Vinny Yuliani; Sadik, Kusman; Kurnia, Anang
2017-03-01
A survey is a data collection method that samples individual units from a population. A national survey, however, provides only limited information, which leads to low precision at the small-area level; when an area is not selected as a sampling unit, no direct estimate can be made at all. Small area estimation methods are therefore required. One model-based method is empirical Bayes, which has been widely used to estimate small-area parameters, even in non-sampled areas. Problems occur, however, when this method is applied to a non-sampled area using a purely synthetic model that ignores area effects. This paper proposes an approach that clusters the area effects of auxiliary variables by assuming that particular areas are similar to one another. Direct estimates in several sub-districts in the regency and city of Bogor are zero because no household below the poverty line appears in the samples selected from those sub-districts; the empirical Bayes method yields nonzero estimates there. Empirical Bayes estimates of the FGT poverty measures under both the Molina & Rao model and the information-cluster approach agree in the sub-districts selected as samples but differ in the non-sampled sub-districts. The empirical Bayes method with cluster information has a smaller coefficient of variation and is therefore better than the method without cluster information for non-sampled sub-districts in the regency and city of Bogor.
Directory of Open Access Journals (Sweden)
Rajesh Singh
2016-06-01
Full Text Available In this paper, the failure intensity is characterized by a one-parameter length-biased exponential class Software Reliability Growth Model (SRGM), assuming a Poisson process of occurrence of software failures. The proposed length-biased exponential class model is a function of two parameters: the total number of failures θ0 and the scale parameter θ1. It is assumed that very little or no information is available about either parameter. The Bayes estimators for θ0 and θ1 are obtained using non-informative priors for each parameter under the squared error loss function. The Monte Carlo simulation technique is used to study the performance of the proposed Bayes estimators against their corresponding maximum likelihood estimators on the basis of risk efficiencies. It is concluded that both proposed Bayes estimators, of the total number of failures and of the scale parameter, perform well for a proper choice of execution time.
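The Monte Carlo risk-efficiency comparison described above can be sketched in a simpler setting. The stand-in below is not the length-biased SRGM: it compares the MLE of an exponential scale parameter against the posterior mean under a Jeffreys prior (my own choices throughout), estimating both squared-error risks by simulation, exactly the kind of comparison the abstract reports.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 10, 200_000

x = rng.exponential(theta, size=(reps, n))
s = x.sum(axis=1)

mle   = s / n        # maximum likelihood estimator of the scale theta
bayes = s / (n - 1)  # posterior mean under the Jeffreys prior pi(theta) ~ 1/theta:
                     # the posterior is inverse-gamma(n, s), with mean s/(n-1)

risk_mle   = np.mean((mle - theta) ** 2)    # Monte Carlo squared-error risk
risk_bayes = np.mean((bayes - theta) ** 2)
print(risk_mle / risk_bayes)                # risk efficiency of MLE vs Bayes
```

In this particular toy setup the MLE happens to have the lower risk; the point of the sketch is the mechanics of risk-efficiency comparison, not which estimator wins.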
Hanike, Yusrianti; Sadik, Kusman; Kurnia, Anang
2016-02-01
This research estimates the unemployment rate in Indonesia, modeled with a Poisson distribution, by modifying post-stratification and combining it with a Small Area Estimation (SAE) model. Post-stratification is a sampling technique in which strata are formed after the survey data have been collected; it is used when the survey was not designed to estimate the area of interest. The area of interest here is the education level of the unemployed, separated into seven categories. The data were obtained from the National Labour Force Survey (Sakernas), conducted by BPS-Statistics Indonesia; this national survey yields samples that are too small at the district level, and an SAE model is one alternative to solve this. Accordingly, we combine post-stratification sampling with an SAE model and consider two main models: Model I treats the education category as a dummy variable, and Model II treats it as an area random effect. Neither model initially satisfied the Poisson assumption. Using a Poisson-Gamma model, the overdispersion in Model I (chi-square/df of 1.23) was reduced to 0.91, and the underdispersion in Model II (0.35) was brought to 0.94. Empirical Bayes was applied to estimate the proportion of unemployment in each education category. Based on the Bayesian Information Criterion (BIC), Model I has a smaller mean squared error (MSE) than Model II.
Hierarchical Bayesian sparse image reconstruction with application to MRFM
Dobigeon, Nicolas; Tourneret, Jean-Yves
2008-01-01
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g. by maximizing the estimated posterior distribution. In our fully Bayesian approach the posteriors of all the parameters are available. Thus our algorithm provides more information than other previously proposed sparse reconstruction...
Estimation of historic flows and sediment loads to San Francisco Bay, 1849–2011
Moftakhari, H.R.; Jay, D.A.; Talke, S.A.; Schoellhamer, David H.
2016-01-01
River flow and sediment transport in estuaries influence morphological development over decadal and century time scales, but hydrological and sedimentological records are typically too short to adequately characterize long-term trends. In this study, we recover archival records and apply a rating curve approach to develop the first instrumental estimates of daily delta inflow and sediment loads to San Francisco Bay (1849–1929). The total sediment load is constrained using sedimentation/erosion estimated from bathymetric survey data to produce continuous daily sediment transport estimates from 1849 to 1955, the time period prior to sediment load measurements. We estimate that ∼55% (45–75%) of the ∼1500 ± 400 million tons (Mt) of sediment delivered to the estuary between 1849 and 2011 was the result of anthropogenic alteration in the watershed that increased sediment supply. Also, the seasonal timing of sediment flux events has shifted because significant spring-melt floods have decreased, causing estimated springtime transport (April 1st to June 30th) to decrease from ∼25% to ∼15% of the annual total. By contrast, wintertime sediment loads (December 1st to March 31st) have increased from ∼70% to ∼80%. A ∼35% reduction of annual flow since the 19th century along with decreased sediment supply has resulted in a ∼50% reduction in annual sediment delivery. The methods developed in this study can be applied to other systems for which unanalyzed historic data exist.
Directory of Open Access Journals (Sweden)
Nengjun Yi
2011-12-01
Full Text Available Complex diseases and traits are likely influenced by many common and rare genetic variants and environmental factors. Detecting disease susceptibility variants is a challenging task, especially when their frequencies are low and/or their effects are small or moderate. We propose here a comprehensive hierarchical generalized linear model framework for simultaneously analyzing multiple groups of rare and common variants and relevant covariates. The proposed hierarchical generalized linear models introduce a group effect and a genetic score (i.e., a linear combination of main-effect predictors for genetic variants) for each group of variants, and jointly they estimate the group effects and the weights of the genetic scores. This framework includes various previous methods as special cases, and it can effectively deal with both risk and protective variants in a group and can simultaneously estimate the cumulative contribution of multiple variants and their relative importance. Our computational strategy is based on extending the standard procedure for fitting generalized linear models in the statistical software R to the proposed hierarchical models, leading to the development of stable and flexible tools. The methods are illustrated with sequence data in gene ANGPTL4 from the Dallas Heart Study. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/).
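The notion of a genetic score as a weighted linear combination of variant genotypes can be sketched outside the BhGLM framework. The toy below is a plain, non-hierarchical logistic regression fitted by IRLS on simulated genotypes, so the variants, weights, and data are entirely hypothetical; it only illustrates how fitted per-variant weights yield a per-subject score.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 2000, 6                       # subjects, variants in one group

G = rng.binomial(2, 0.15, size=(n, p))              # genotypes (0/1/2 copies)
w_true = np.array([0.5, 0.4, 0.0, -0.3, 0.2, 0.0])  # risk and protective variants
lin = -1.0 + G @ w_true
ybin = rng.binomial(1, 1 / (1 + np.exp(-lin)))      # simulated case/control status

# IRLS (Newton) for logistic regression on the individual variants; the
# fitted linear combination G @ beta[1:] is the estimated genetic score.
X = np.column_stack([np.ones(n), G])
beta = np.zeros(p + 1)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    W = mu * (1 - mu)
    z = X @ beta + (ybin - mu) / W
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

score = G @ beta[1:]
print(np.corrcoef(score, G @ w_true)[0, 1])   # estimated vs true score
```

The hierarchical framework in the abstract goes further by shrinking the weights within each group and adding a group-level effect; the sketch only shows the score construction itself.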
Directory of Open Access Journals (Sweden)
Andria Ansri Utama
2015-06-01
Full Text Available The Jakarta Bay is known as a fishing ground area for several traditional types of fishing gears. The fishery has an important role in providing nutrition, sustainable livelihoods, and poverty alleviation around the area. Abundance estimation of commercial fish species in the Jakarta Bay is essential, particularly for comparable data series, in order to evaluate potential changes in distribution and abundance. The purpose of this study is to analyze the distribution of commercial fish species in the Jakarta Bay and estimate their abundance and biomass. Fish assemblages were concentrated in the eastern and central parts of the bay. Apparently salinity and DO associated with rich densities of phytoplankton and zooplankton may explain the spatial variability of short-bodied mackerel and ponyfish, while the assemblage patterns of spiny hairtail and croaker might be driven by the availability of small planktivorous fish as their diet. The most abundant commercial fish in the Jakarta Bay are short-bodied mackerel (Rastrelliger brachysoma), ponyfish (Leiognathus sp.), croaker (Johnius sp.), and spiny hairtail (Lepturacanthus savala). Furthermore, biomass estimates for those species showed that short-bodied mackerel has the highest biomass, followed by spiny hairtail, croaker, and ponyfish.
Pose Estimation using a Hierarchical 3D Representation of Contours and Surfaces
DEFF Research Database (Denmark)
Buch, Anders Glent; Kraft, Dirk; Kämäräinen, Joni-Kristian
2013-01-01
We present a system for detecting the pose of rigid objects using texture and contour information. From a stereo image view of a scene, a sparse hierarchical scene representation is reconstructed using an early cognitive vision system. We define an object model in terms of a simple context descriptor of the contour and texture features to provide a sparse, yet descriptive object representation. Using our descriptors, we do a search in the correspondence space to perform outlier removal and compute the object pose. We perform an extensive evaluation of our approach with stereo images...
Hierarchical approaches to estimate energy expenditure using phone-based accelerometers.
Vathsangam, Harshvardhan; Schroeder, E Todd; Sukhatme, Gaurav S
2014-07-01
Physical inactivity is linked with increase in risk of cancer, heart disease, stroke, and diabetes. Walking is an easily available activity to reduce sedentary time. Objective methods to accurately assess energy expenditure from walking that is normalized to an individual would allow tailored interventions. Current techniques rely on normalization by weight scaling or fitting a polynomial function of weight and speed. Using the example of steady-state treadmill walking, we present a set of algorithms that extend previous work to include an arbitrary number of anthropometric descriptors. We specifically focus on predicting energy expenditure using movement measured by mobile phone-based accelerometers. The models tested include nearest neighbor models, weight-scaled models, a set of hierarchical linear models, multivariate models, and speed-based approaches. These are compared for prediction accuracy as measured by normalized average root mean-squared error across all participants. Nearest neighbor models showed highest errors. Feature combinations corresponding to sedentary energy expenditure, sedentary heart rate, and sex alone resulted in errors that were higher than speed-based models and nearest-neighbor models. Size-based features such as BMI, weight, and height produced lower errors. Hierarchical models performed better than multivariate models when size-based features were used. We used the hierarchical linear model to determine the best individual feature to describe a person. Weight was the best individual descriptor followed by height. We also test models for their ability to predict energy expenditure with limited training data. Hierarchical models outperformed personal models when a low amount of training data were available. Speed-based models showed poor interpolation capability, whereas hierarchical models showed uniform interpolation capabilities across speeds.
Parameter Estimation for Gravitational-wave Bursts with the BayesWave Pipeline
Bécsy, Bence; Raffai, Peter; Cornish, Neil J.; Essick, Reed; Kanner, Jonah; Katsavounidis, Erik; Littenberg, Tyson B.; Millhouse, Margaret; Vitale, Salvatore
2017-04-01
We provide a comprehensive multi-aspect study of the performance of a pipeline used by the LIGO-Virgo Collaboration for estimating parameters of gravitational-wave bursts. We add simulated signals with four different morphologies (sine-Gaussians (SGs), Gaussians, white-noise bursts, and binary black hole signals) to simulated noise samples representing noise of the two Advanced LIGO detectors during their first observing run. We recover them with the BayesWave (BW) pipeline to study its accuracy in sky localization, waveform reconstruction, and estimation of model-independent waveform parameters. BW localizes sources with a level of accuracy comparable for all four morphologies, with the median separation of actual and estimated sky locations ranging from 25.1° to 30.3°. This is a reasonable accuracy in the two-detector case, and is comparable to accuracies of other localization methods studied previously. As BW reconstructs generic transient signals with SG wavelets, it is unsurprising that BW performs best in reconstructing SG and Gaussian waveforms. The BW accuracy in waveform reconstruction increases steeply with the network signal-to-noise ratio (S/N_net), reaching an 85% and 95% match between the reconstructed and actual waveform below S/N_net ≈ 20 and S/N_net ≈ 50, respectively, for all morphologies. The BW accuracy in estimating central moments of waveforms is only limited by statistical errors in the frequency domain, and is also affected by systematic errors in the time domain as BW cannot reconstruct low-amplitude parts of signals that are overwhelmed by noise. The figures of merit we introduce can be used in future characterizations of parameter estimation pipelines.
Joseph, Bachman L.; Phillips, P.J.
1996-01-01
Base-flow samples were collected from 47 sampling sites for four seasons from 1990-91 on the Delmarva Peninsula in Delaware and Maryland to relate stream chemistry to a "hydrologic landscape" and season. Two hydrologic landscapes were determined: (1) a well-drained landscape, characterized by a combination of a low percentage of forest cover, a low percentage of poorly drained soil, and elevated channel slope; and (2) poorly drained landscape, characterized by a combination of an elevated percentage of forest cover, an elevated percentage of poorly drained soil, and low channel slope. Concentrations of nitrogen were significantly related to the hydrologic landscape. Nitrogen concentrations tended to be higher in well-drained landscapes than in poorly drained ones. The highest instantaneous nitrogen yields occurred in well-drained landscapes during the winter. These yields were extrapolated over the part of the study area draining to Chesapeake Bay in order to provide a rough estimate of nitrogen load from base flow to the Bay and its estuarine tributaries. This estimate was compared to an estimate made by extrapolating from an existing long-term monitoring station. The load estimate from the stream survey data was 5 × 10^6 kg of N per year, which was about four times the estimate made from the existing long-term monitoring station. The stream-survey estimate of base flow represents about 40 percent of the total nitrogen load that enters the Bay and estuarine tributaries from all sources in the study area.
Importance of shrinkage in empirical bayes estimates for diagnostics: problems and solutions.
Savic, Radojka M; Karlsson, Mats O
2009-09-01
Empirical Bayes ("post hoc") estimates (EBEs) of etas provide modelers with diagnostics: the EBEs themselves, individual predictions (IPRED), and residual errors (individual weighted residuals, IWRES). When data are uninformative at the individual level, the EBE distribution will shrink towards zero (η-shrinkage, quantified as 1 − SD(η_EBE)/ω), IPREDs towards the corresponding observations, and IWRES towards zero (ε-shrinkage, quantified as 1 − SD(IWRES)). These diagnostics are widely used in pharmacokinetic (PK) and pharmacodynamic (PD) modeling; we investigate here their usefulness in the presence of shrinkage. Datasets were simulated from a range of PK-PD models, EBEs estimated by non-linear mixed effects modeling based on the true or a misspecified model, and the desired diagnostics evaluated both qualitatively and quantitatively. Identified consequences of η-shrinkage on EBE-based model diagnostics include non-normal and/or asymmetric distributions of EBEs with mean values ("ETABAR") significantly different from zero, even for a correctly specified model; EBE-EBE correlations and covariate relationships may be masked, falsely induced, or the shape of the true relationship distorted. Consequences of ε-shrinkage included low power of IPRED and IWRES to diagnose structural and residual error model misspecification, respectively. EBE-based diagnostics should be interpreted with caution whenever substantial η- or ε-shrinkage exists (usually greater than 20% to 30%). Reporting the magnitude of η- and ε-shrinkage will facilitate the informed use and interpretation of EBE-based diagnostics.
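The shrinkage quantity defined above is straightforward to compute. The sketch below uses simulated stand-ins for the EBEs: the "informativeness" factor that pulls the estimates toward the population mean is an assumption for illustration, not output from an actual mixed-effects fit, and it then evaluates η-shrinkage = 1 − SD(EBE)/ω.

```python
import numpy as np

rng = np.random.default_rng(2)
omega, n_id = 0.3, 100                 # population SD of eta; number of subjects
true_eta = rng.normal(0.0, omega, n_id)

# Stand-in for EBEs: sparse individual data pulls the estimates toward the
# population mean 0; 'informativeness' crudely mimics that pulling.
shrinkage = {}
for informativeness in (0.9, 0.3):     # rich vs sparse individual data
    ebe = informativeness * true_eta
    shrinkage[informativeness] = 1 - ebe.std(ddof=1) / omega

print(shrinkage)  # the sparse-data case (0.3) shows much higher eta-shrinkage
```

In practice the EBEs come from the mixed-effects fit itself, but the same one-line formula applied to them gives the 20-30% warning threshold the abstract recommends monitoring.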
Geodetic estimates of fault slip rates in the San Francisco Bay area
Savage, J. C.; Svarc, J. L.; Prescott, W. H.
1999-03-01
Bourne et al. [1998] have suggested that the interseismic velocity profile at the surface across a transform plate boundary is a replica of the secular velocity profile at depth in the plastosphere. On the other hand, in the viscoelastic coupling model the shape of the interseismic surface velocity profile is a consequence of plastosphere relaxation following the previous rupture of the faults that make up the plate boundary and is not directly related to the secular flow in the plastosphere. The two models appear to be incompatible. If the plate boundary is composed of several subparallel faults and the interseismic surface velocity profile across the boundary known, each model predicts the secular slip rates on the faults which make up the boundary. As suggested by Bourne et al., the models can then be tested by comparing the predicted secular slip rates to those estimated from long-term offsets inferred from geology. Here we apply that test to the secular slip rates predicted for the principal faults (San Andreas, San Gregorio, Hayward, Calaveras, Rodgers Creek, Green Valley and Greenville faults) in the San Andreas fault system in the San Francisco Bay area. The estimates from the two models generally agree with one another and to a lesser extent with the geologic estimate. Because the viscoelastic coupling model has been equally successful in estimating secular slip rates on the various fault strands at a diffuse plate boundary, the success of the model of Bourne et al. [1998] in doing the same thing should not be taken as proof that the interseismic velocity profile across the plate boundary at the surface is a replica of the velocity profile at depth in the plastosphere.
The Bayes Estimation of Parameter for Linear Exponential Distribution
Institute of Scientific and Technical Information of China (English)
谭玲; 李金玉
2011-01-01
For a sample X1, X2, …, Xn of size n from the linear exponential distribution, the Bayes estimate, hierarchical (multi-level) Bayes estimate, E-Bayes estimate, and maximum likelihood estimate of the parameter θ are discussed under the Linex loss function, using a conjugate prior distribution.
Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun
2017-08-01
Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill in areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD, spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model possesses a CV result (R² = 0.81) that reflects higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
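A stripped-down version of spatial Gaussian-process regression can be written directly from the standard GP posterior-mean formula. Everything below, kernel, length scale, noise level, and the smooth toy surface standing in for spatial PM2.5 variation, is an assumption for illustration; the full model in the abstract additionally has AOD covariates and hierarchical layers.

```python
import numpy as np

rng = np.random.default_rng(3)

def sq_exp_kernel(a, b, ell=1.0, tau2=1.0):
    """Squared-exponential covariance between coordinate arrays a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return tau2 * np.exp(-0.5 * d2 / ell**2)

# Toy 'monitor' locations and a smooth spatial surface plus noise
X = rng.uniform(0, 5, size=(80, 2))
f = np.sin(X[:, 0]) + 0.5 * np.cos(X[:, 1])
y = f + rng.normal(0, 0.1, 80)

# GP posterior mean at held-out locations (sigma2 is the noise variance)
Xs = rng.uniform(0, 5, size=(40, 2))
fs = np.sin(Xs[:, 0]) + 0.5 * np.cos(Xs[:, 1])
sigma2 = 0.1 ** 2
K  = sq_exp_kernel(X, X) + sigma2 * np.eye(80)
Ks = sq_exp_kernel(Xs, X)
mean_pred = Ks @ np.linalg.solve(K, y)

# Out-of-sample R^2 against the noiseless surface
r2 = 1 - np.sum((fs - mean_pred) ** 2) / np.sum((fs - fs.mean()) ** 2)
print(round(r2, 3))
```

The hierarchical Bayesian version places priors on the kernel hyperparameters and adds regression terms for AOD, rather than fixing them as done here.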
Bayes-Optimal Joint Channel-and-Data Estimation for Massive MIMO With Low-Precision ADCs
Wen, Chao-Kai; Wang, Chang-Jen; Jin, Shi; Wong, Kai-Kit; Ting, Pangan
2016-05-01
This paper considers a multiple-input multiple-output (MIMO) receiver with very low-precision analog-to-digital convertors (ADCs) with the goal of developing massive MIMO antenna systems that require minimal cost and power. Previous studies demonstrated that the training duration should be {\\em relatively long} to obtain acceptable channel state information. To address this requirement, we adopt a joint channel-and-data (JCD) estimation method based on Bayes-optimal inference. This method yields minimal mean square errors with respect to the channels and payload data. We develop a Bayes-optimal JCD estimator using a recent technique based on approximate message passing. We then present an analytical framework to study the theoretical performance of the estimator in the large-system limit. Simulation results confirm our analytical results, which allow the efficient evaluation of the performance of quantized massive MIMO systems and provide insights into effective system design.
A tutorial on Bayes factor estimation with the product space method
Lodewyckx, T.; Kim, W.; Lee, M.D.; Tuerlinckx, F.; Kuppens, P.; Wagenmakers, E.-J.
2011-01-01
The Bayes factor is an intuitive and principled model selection tool from Bayesian statistics. The Bayes factor quantifies the relative likelihood of the observed data under two competing models, and as such, it measures the evidence that the data provides for one model versus the other. Unfortunate
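For readers new to Bayes factors, a toy example with analytic marginal likelihoods can make the definition concrete. This is not the product space method itself (which estimates Bayes factors by MCMC over a model-indicator variable); it compares a fair-coin point null against a uniform prior on the bias, both of which have closed-form marginals:

```python
from math import comb

# Bayes factor BF01 for k heads in n tosses.
# M0: theta = 0.5 exactly; M1: theta ~ Uniform(0, 1) = Beta(1, 1).
# Under Beta(1,1), the marginal likelihood of any k is 1 / (n + 1).
def bayes_factor_01(k, n):
    m0 = comb(n, k) * 0.5 ** n      # marginal likelihood under M0
    m1 = 1.0 / (n + 1)              # marginal likelihood under M1
    return m0 / m1

print(bayes_factor_01(9, 10))       # < 1: data favor the biased-coin model
print(bayes_factor_01(5, 10))       # > 1: data mildly favor the fair coin
```

The product space method becomes necessary precisely when such marginal likelihoods have no closed form.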
Bayesian methods for estimating the reliability in complex hierarchical networks (interim report).
Energy Technology Data Exchange (ETDEWEB)
Marzouk, Youssef M.; Zurn, Rena M.; Boggs, Paul T.; Diegert, Kathleen V. (Sandia National Laboratories, Albuquerque, NM); Red-Horse, John Robert (Sandia National Laboratories, Albuquerque, NM); Pebay, Philippe Pierre
2007-05-01
Current work on the Integrated Stockpile Evaluation (ISE) project is evidence of Sandia's commitment to maintaining the integrity of the nuclear weapons stockpile. In this report, we undertake a key element in that process: development of an analytical framework for determining the reliability of the stockpile in a realistic environment of time-variance, inherent uncertainty, and sparse available information. This framework is probabilistic in nature and is founded on a novel combination of classical and computational Bayesian analysis, Bayesian networks, and polynomial chaos expansions. We note that, while the focus of the effort is stockpile-related, it is applicable to any reasonably-structured hierarchical system, including systems with feedback.
Seck, A.; Welty, C.
2012-12-01
Characterization of subsurface hydrogeologic properties in three dimensions and at large scales for use in groundwater flow models can remain a challenge owing to the lack of regional data sets and scatter in coverage, type, and format of existing small-scale data sets. This is the case for the Chesapeake Bay watershed, where numerous studies have been carried out to quantify groundwater processes at small scales but limited information is available on subsurface characteristics and groundwater fluxes at regional scales. One goal of this work is to synthesize disparate information on subsurface properties for the Chesapeake Bay watershed for use in a 3D integrated ParFlow model over an area of 400,000 km2 with a horizontal resolution of 1 km and a vertical resolution of 5 m. We combined different types of data at various scales to characterize hydrostratigraphy and hydrogeological properties. The conceptual hydrogeologic model of the study area is composed of two major regions. One region extends from the Valley and Ridge physiographic province south of New York to the Piedmont physiographic province in Maryland and Virginia. This region is generally characterized by fractured rock overlain by a mantle of regolith. Soil thickness and hydraulic conductivity values were obtained from the U.S. General Soil Map (STATSGO2). Saprolite thickness was evaluated using casing depth information from well completion reports from four state agencies. Geostatistical methods were used to generalize point data to the model extent and resolution. A three-dimensional hydraulic conductivity field for fractured bedrock was estimated using a published national map of permeability and depth-varying functions from the literature. The Coastal Plain of Maryland, Virginia, Delaware and New Jersey constitutes the second region and is characterized by layered sediments. In this region, the geometry of 20 aquifers and confining units was constructed using interpolation of published contour maps of
2013-01-01
Distance sampling is widely used to estimate the abundance or density of wildlife populations. Methods to estimate wildlife mortality rates have developed largely independently from distance sampling, despite the conceptual similarities between estimation of cumulative mortality and the population density of living animals. Conventional distance sampling analyses rely on the assumption that animals are distributed uniformly with respect to transects and thus require randomized placement of tr...
Bayesian hierarchical modeling of drug stability data.
Chen, Jie; Zhong, Jinglin; Nie, Lei
2008-06-15
Stability data are commonly analyzed using a linear fixed or random effects model. The linear fixed effects model does not take into account the batch-to-batch variation, whereas the random effects model may suffer from unreliable shelf-life estimates due to small sample sizes. Moreover, neither method utilizes any prior information that might have been available. In this article, we propose a Bayesian hierarchical approach to modeling drug stability data. Under this hierarchical structure, we first use a Bayes factor to test the poolability of batches. Given the decision on poolability of batches, we then estimate the shelf-life that applies to all batches. The approach is illustrated with two example data sets, and its performance is compared in simulation studies with that of the commonly used frequentist methods. (c) 2008 John Wiley & Sons, Ltd.
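A sketch of the frequentist baseline the abstract contrasts with: fit a linear degradation trend to one batch and read off the shelf life where the fitted mean crosses the specification limit. The data are synthetic; a regulatory analysis would use a confidence bound on the mean rather than the point fit, and the Bayesian hierarchical version pools batches through priors:

```python
import numpy as np

# Single-batch linear degradation fit (synthetic data): potency is
# modeled as b0 + b1*t, and the shelf life is the time at which the
# fitted line reaches the lower specification limit.
t = np.array([0, 3, 6, 9, 12, 18, 24], dtype=float)   # months
rng = np.random.default_rng(5)
y = 100.0 - 0.45 * t + rng.normal(0.0, 0.3, t.size)   # % of label claim
b1, b0 = np.polyfit(t, y, 1)                          # slope, intercept
spec = 90.0                                           # lower spec limit (%)
shelf_life = (spec - b0) / b1                         # months to cross spec
print(shelf_life)
```

With several batches, repeating this fit per batch and comparing slopes is what the Bayes-factor poolability test formalizes.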
Budde, Kristin S; Barron, Daniel S; Fox, Peter T
2014-12-01
Developmental stuttering is a speech disorder most likely due to a heritable form of developmental dysmyelination impairing the function of the speech-motor system. Speech-induced brain-activation patterns in persons who stutter (PWS) are anomalous in various ways; the consistency of these aberrant patterns is a matter of ongoing debate. Here, we present a hierarchical series of coordinate-based meta-analyses addressing this issue. Two tiers of meta-analyses were performed on a 17-paper dataset (202 PWS; 167 fluent controls). Four large-scale (top-tier) meta-analyses were performed, two for each subject group (PWS and controls). These analyses robustly confirmed the regional effects previously postulated as "neural signatures of stuttering" (Brown, Ingham, Ingham, Laird, & Fox, 2005) and extended this designation to additional regions. Two smaller-scale (lower-tier) meta-analyses refined the interpretation of the large-scale analyses: (1) a between-group contrast targeting differences between PWS and controls (stuttering trait); and (2) a within-group contrast (PWS only) of stuttering with induced fluency (stuttering state).
Badshah, Amir; Choudhry, Aadil Jaleel; Ullah, Shan
2017-03-01
Industries are moving towards automation in order to increase productivity and ensure quality. A variety of electronic and electromagnetic systems are being employed to assist the human operator in fast and accurate quality inspection of products. The majority of these systems are equipped with cameras and rely on diverse image processing algorithms. Information is lost in a 2D image, so acquiring accurate 3D data from 2D images remains an open issue. FAST, SURF, and SIFT are well-known spatial-domain techniques for feature extraction and hence image registration to find correspondence between images. The efficiency of these methods is measured in terms of the number of perfect matches found. A novel fast and robust technique for stereo-image processing is proposed. It is based on non-rigid registration using modified normalized phase correlation. The proposed method registers two images in hierarchical fashion using a quad-tree structure. The registration process works from global to local level, yielding robust matches even in the presence of blur and noise. The computed matches can further be utilized to determine disparity and depth for industrial product inspection; the same can be used in driver assistance systems. Preliminary tests on the Middlebury dataset produced satisfactory results. The execution time for a 413 x 370 stereo pair is approximately 500 ms on a low-cost DSP.
Estimates of vertical velocities and eddy coefficients in the Bay of Bengal
Digital Repository Service at National Institute of Oceanography (India)
Varkey, M.J.; Sastry, J.S.
Vertical velocities and eddy coefficients in the intermediate depths of the Bay of Bengal are calculated from mean hydrographic data for 300-mile squares. The linear current density (sigma-0) versus log-depth plots show a steady balance between...
Estimating vegetation coverage in St. Joseph Bay, Florida with an airborne multispectral scanner
Savastano, K. J.; Faller, K. H.; Iverson, R. L.
1984-01-01
A four-channel multispectral scanner (MSS) carried aboard an aircraft was used to collect data along several flight paths over St. Joseph Bay, FL. Various classifications of benthic features were defined from the results of ground-truth observations. The classes were statistically correlated with MSS channel signal intensity using multivariate methods. Application of the classification measures to the MSS data set allowed computer construction of a detailed map of benthic features of the bay. Various densities of seagrasses, various bottom types, and algal coverage were distinguished from water of various depths. The areal vegetation coverage of St. Joseph Bay was not significantly different from the results of a survey conducted six years previously, suggesting that seagrasses are a very stable feature of the bay bottom.
Digital Repository Service at National Institute of Oceanography (India)
Sadhuram, Y.; Murthy, T.V.R.; Somayajulu, Y.K.
In the present study, cyclone heat potential (CHP) in the Bay of Bengal has been estimated for different seasons using Levitus climatology. A good association was noticed between CHP and the efficiency of intensification (i.e the ratio between...
Laplanche, Christophe
2010-04-01
The author compares 12 hierarchical models with the aim of estimating the abundance of fish in alpine streams by using removal sampling data collected at multiple locations. The most expanded model accounts for (i) variability of the abundance among locations, (ii) variability of the catchability among locations, and (iii) residual variability of the catchability among fish. Eleven model reductions are considered, depending on which variability is included in the model. The most restrictive model considers none of the aforementioned variabilities. Computations for the latter model can be achieved by using the algorithm presented by Carle and Strub (Biometrics 1978, 34, 621-630). Maximum a posteriori and interval estimates of the parameters, as well as the Akaike and Bayesian information criteria of model fit, are computed by using samples simulated by a Markov chain Monte Carlo method. The models are compared by using a trout (Salmo trutta fario) parr (0+) removal sampling data set collected at three locations in the Pyrénées mountain range (Haute-Garonne, France) in July 2006. Results suggest that, in this case study, variability of the catchability is not significant, either among fish or among locations. Variability of the abundance among locations is significant. 95% interval estimates of the abundances at the three locations are [0.15, 0.24], [0.26, 0.36], and [0.45, 0.58] parr per m^2. Such differences are likely the consequence of habitat variability.
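The removal-sampling likelihood underlying these models can be sketched as a brute-force maximum-likelihood fit. This is a simple stand-in for the Carle & Strub algorithm cited above, assuming constant catchability across passes; the catch numbers are invented:

```python
import math

# k-pass removal data: each pass removes Binomial(remaining, p) fish.
# Grid-search ML over abundance N and catchability p (a stand-in for
# the Carle & Strub iterative estimator; catches below are invented).
def removal_mle(catches, n_max=1000):
    T = sum(catches)
    best_ll, best = -math.inf, (T, 0.5)
    for N in range(T, n_max):
        for p100 in range(1, 100):
            p = p100 / 100.0
            ll, remaining = 0.0, N
            for c in catches:
                # log Binomial(remaining, p) pmf evaluated at c
                ll += (math.lgamma(remaining + 1) - math.lgamma(c + 1)
                       - math.lgamma(remaining - c + 1)
                       + c * math.log(p) + (remaining - c) * math.log(1.0 - p))
                remaining -= c
            if ll > best_ll:
                best_ll, best = ll, (N, p)
    return best

N_hat, p_hat = removal_mle([120, 70, 40])
print(N_hat, p_hat)
```

The hierarchical models in the paper replace this single (N, p) pair with location- and fish-level distributions fitted by MCMC.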
Kim, J.; Kwon, H. H.
2014-12-01
The existing regional frequency analysis has the disadvantage that it is difficult to consider geographical characteristics in estimating areal rainfall. In this regard, this study develops a hierarchical-Bayesian-model-based regional frequency analysis in which spatial patterns of the design rainfall are explicitly incorporated using geographical information. This study assumes that the parameters of the Gumbel distribution are a function of geographical characteristics (e.g., altitude, latitude, and longitude) within a general linear regression framework. Posterior distributions of the regression parameters are estimated by a Bayesian Markov chain Monte Carlo (MCMC) method, and the identified functional relationship is used to spatially interpolate the parameters of the Gumbel distribution by using digital elevation models (DEM) as inputs. The proposed model is applied to derive design rainfalls over the entire Han River watershed. It was found that the proposed Bayesian regional frequency analysis model showed similar results compared to L-moment-based regional frequency analysis. In addition, the model showed an advantage in quantifying the uncertainty of the design rainfall and in estimating areal rainfall considering geographical information. Acknowledgement: This research was supported by a grant (14AWMP-B079364-01) from the Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
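The regression link described above can be sketched as follows. The coefficient vectors are hypothetical placeholders (a fitted model would draw them from the MCMC posterior); only the Gumbel quantile formula itself is standard:

```python
import math

# Gumbel quantile for a given return period: the design rainfall is
# mu - beta * log(-log(1 - 1/T)).
def gumbel_quantile(mu, beta, return_period):
    p = 1.0 - 1.0 / return_period
    return mu - beta * math.log(-math.log(p))

# Location and scale modeled as linear functions of site covariates
# (altitude, latitude, longitude). The coefficients are made-up
# placeholders, NOT fitted posterior values from the study.
def site_params(altitude_m, lat, lon,
                coef_mu=(120.0, 0.02, 1.5, -0.8),
                coef_beta=(35.0, 0.005, 0.4, -0.2)):
    feats = (1.0, altitude_m, lat, lon)
    mu = sum(c * f for c, f in zip(coef_mu, feats))
    beta = sum(c * f for c, f in zip(coef_beta, feats))
    return mu, beta

mu, beta = site_params(250.0, 37.5, 127.0)
print(gumbel_quantile(mu, beta, 100))   # 100-year design rainfall (mm)
```

Spatial interpolation then amounts to evaluating `site_params` on DEM-derived covariates at every grid cell.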
Licquia, Timothy C
2014-01-01
We present improved estimates of several global properties of the Milky Way, including its current star formation rate (SFR), the stellar mass contained in its disk and bulge+bar components, as well as its total stellar mass. We do so by combining previous measurements from the literature using a hierarchical Bayesian (HB) statistical method that allows us to account for the possibility that any value may be incorrect or have underestimated errors. We show that this method is robust to a wide variety of assumptions about the nature of problems in individual measurements or error estimates. Ultimately, our analysis yields a SFR for the Galaxy of $\\dot{\\rm M}_\\star=1.65\\pm0.19$ ${\\rm M}_\\odot$ yr$^{-1}$. By combining HB methods with Monte Carlo simulations that incorporate the latest estimates of the Galactocentric radius of the Sun, $R_0$, the exponential scale-length of the disk, $L_d$, and the local surface density of stellar mass, $\\Sigma_\\star(R_0)$, we show that the mass of the Galactic bulge+bar is ${\\rm...
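As a simple point of reference for the hierarchical combination described above (the HB method additionally models the probability that each measurement is wrong or has underestimated errors; that machinery is omitted here), the classical inverse-variance weighted mean on made-up SFR-like values looks like:

```python
import numpy as np

# Inverse-variance weighted combination of independent measurements.
# Values and uncertainties below are invented for illustration; they
# are not the literature measurements used in the paper.
values = np.array([1.9, 1.45, 1.65, 2.1])   # e.g. SFR estimates (Msun/yr)
sigmas = np.array([0.4, 0.25, 0.2, 0.5])    # quoted 1-sigma errors

w = 1.0 / sigmas**2
wmean = float(np.sum(w * values) / np.sum(w))
werr = float(1.0 / np.sqrt(np.sum(w)))
print(wmean, werr)
```

The weakness this exposes is exactly what motivates the HB approach: a single measurement with an overconfident error bar dominates the weighted mean, whereas the hierarchical model can down-weight it.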
Crovelli, Robert A.; Coe, Jeffrey A.
2008-01-01
The Probabilistic Landslide Assessment Cost Estimation System (PLACES) presented in this report estimates the number and economic loss (cost) of landslides during a specified future time in individual areas, and then calculates the sum of those estimates. The analytic probabilistic methodology is based upon conditional probability theory and laws of expectation and variance. The probabilistic methodology is expressed in the form of a Microsoft Excel computer spreadsheet program. Using historical records, the PLACES spreadsheet is used to estimate the number of future damaging landslides and total damage, as economic loss, from future landslides caused by rainstorms in 10 counties of the San Francisco Bay region in California. Estimates are made for any future 5-year period of time. The estimated total number of future damaging landslides for the entire 10-county region during any future 5-year period of time is about 330. Santa Cruz County has the highest estimated number of damaging landslides (about 90), whereas Napa, San Francisco, and Solano Counties have the lowest estimated number of damaging landslides (5-6 each). Estimated direct costs from future damaging landslides for the entire 10-county region for any future 5-year period are about US $76 million (year 2000 dollars). San Mateo County has the highest estimated costs ($16.62 million), and Solano County has the lowest estimated costs (about $0.90 million). Estimated direct costs are also subdivided into public and private costs.
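The expectation-and-variance bookkeeping described above can be sketched with the standard random-sum formulas; the rates and costs below are invented for illustration, not PLACES outputs:

```python
# Total cost C is a sum of N per-landslide costs X. For independent
# N and X, the random-sum (compound distribution) formulas give
#   E[C]   = E[N] * E[X]
#   Var[C] = E[N] * Var[X] + Var[N] * E[X]**2
# All numeric inputs below are invented, not values from the report.
def expected_cost(e_n, var_n, e_x, var_x):
    e_c = e_n * e_x
    var_c = e_n * var_x + var_n * e_x ** 2
    return e_c, var_c

# e.g. ~90 expected landslides (Poisson-like, so Var[N] = E[N]),
# mean cost 0.23 $M per landslide with variance 0.05 ($M)^2
e_c, var_c = expected_cost(e_n=90.0, var_n=90.0, e_x=0.23, var_x=0.05)
print(e_c, var_c ** 0.5)
```

Summing such per-county expectations and variances gives the regional totals, which is what the spreadsheet automates.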
Durner, Wolfgang; Schelle, Henrike; Schlüter, Steffen; Vogel, Hans-Jörg; Ippisch, Olaf; Vanderborght, Jan
2013-04-01
Soils are structured on multiple spatial scales, originating from inhomogeneities of the parent material, pedogenesis, soil organisms, plant roots, or tillage. This leads to heterogeneities that affect the local hydraulic properties and thus govern the flow behavior of water in soil. To assess the impact of individual or combined structural components on the water dynamics within a soil, complex 2D and 3D virtual realities, representing cultivated soils with spatial heterogeneity on multiple scales were constructed with a high spatial resolution by the interdisciplinary research group INVEST (virtual institute of the Helmholtz Association). At these systems, numerical simulations of water dynamics under different boundary conditions were performed. From the simulation results, datasets of water contents and matric heads, as are recorded in typical field campaigns were extracted. With these data, effective soil hydraulic properties were estimated by 1D inverse simulation, which were then used to predict the water balance. The results showed that measurements, particularly those of water contents, depended strongly on the measuring position and hence led to different estimates of the soil hydraulic properties. Nevertheless, in most cases, the average of the predicted water balances obtained from the 1D simulations and the estimated effective soil hydraulic properties agreed very well with those attained from the 2D systems. In contrast, when using data from only one observation profile, the calculation of the water balance was very uncertain.
Liu, Zong-Xiang; Wu, De-Hui; Xie, Wei-Xin; Li, Liang-Qun
2017-02-15
Tracking a target that maneuvers at a variable turn rate is a challenging problem. The traditional solution to this problem is the switching multiple-model technique, which includes several dynamic models with different turn rates for matching the motion mode of the target at each point in time. However, the actual motion mode of a target at any time may differ from all of the dynamic models, because these models are usually limited in number. To address this problem, we establish a formula for estimating the turn rate of a maneuvering target. By applying the turn-rate estimation method to the multi-target Bayes (MB) filter, we develop an MB filter with adaptive estimation of the turn rate, in order to track multiple maneuvering targets. Simulation results indicate that the MB filter with adaptive estimation of the turn rate is better than the existing filter at tracking targets that maneuver at variable turn rates.
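For context, a sketch of the standard coordinated-turn motion model that a known turn rate plugs into; the paper's contribution is estimating that rate adaptively inside a multi-target Bayes filter, which is not reproduced here:

```python
import math

# Coordinated-turn propagation over one sampling interval T for state
# (x, vx, y, vy) with turn rate w (rad/s). This is the textbook model;
# the adaptive estimation of w from the paper is not shown.
def ct_step(state, w, T):
    x, vx, y, vy = state
    s, c = math.sin(w * T), math.cos(w * T)
    return (x + vx * s / w - vy * (1.0 - c) / w,
            vx * c - vy * s,
            y + vx * (1.0 - c) / w + vy * s / w,
            vx * s + vy * c)

# A target at the origin with unit speed along +x, turning at
# w = pi/2 rad/s, traces a quarter circle of radius 2/pi in one second.
print(ct_step((0.0, 1.0, 0.0, 0.0), math.pi / 2, 1.0))
```

A mismatched fixed w in this model is exactly the failure mode that motivates estimating the turn rate online.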
Energy Technology Data Exchange (ETDEWEB)
Lipnikov, Konstantin [Los Alamos National Laboratory; Agouzal, Abdellatif [UNIV DE LYON; Vassilevski, Yuri [Los Alamos National Laboratory
2009-01-01
We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with $N_h$ triangles, the error is proportional to $N_h^{-1}$ and the gradient of the error is proportional to $N_h^{-1/2}$, which are the optimal asymptotics. The methodology is verified with numerical experiments.
Hierarchical probing for estimating the trace of the matrix inverse on toroidal lattices
Stathopoulos, Andreas; Orginos, Kostas
2013-01-01
The standard approach for computing the trace of the inverse of a very large, sparse matrix $A$ is to view the trace as the mean value of matrix quadratures, and use the Monte Carlo algorithm to estimate it. This approach is heavily used in our motivating application of Lattice QCD. Often, the elements of $A^{-1}$ display certain decay properties away from the non zero structure of $A$, but random vectors cannot exploit this induced structure of $A^{-1}$. Probing is a technique that, given a sparsity pattern of $A$, discovers elements of $A$ through matrix-vector multiplications with specially designed vectors. In the case of $A^{-1}$, the pattern is obtained by distance-$k$ coloring of the graph of $A$. For sufficiently large $k$, the method produces accurate trace estimates but the cost of producing the colorings becomes prohibitively expensive. More importantly, it is difficult to search for an optimal $k$ value, since none of the work for prior choices of $k$ can be reused.
Hierarchical probing for estimating the trace of the matrix inverse on toroidal lattices
Energy Technology Data Exchange (ETDEWEB)
Stathopoulos, Andreas [College of William and Mary, Williamsburg, VA; Laeuchli, Jesse [College of William and Mary, Williamsburg, VA; Orginos, Kostas [College of William and Mary, Williamsburg, VA; Jefferson Lab
2013-10-01
The standard approach for computing the trace of the inverse of a very large, sparse matrix $A$ is to view the trace as the mean value of matrix quadratures, and use the Monte Carlo algorithm to estimate it. This approach is heavily used in our motivating application of Lattice QCD. Often, the elements of $A^{-1}$ display certain decay properties away from the non zero structure of $A$, but random vectors cannot exploit this induced structure of $A^{-1}$. Probing is a technique that, given a sparsity pattern of $A$, discovers elements of $A$ through matrix-vector multiplications with specially designed vectors. In the case of $A^{-1}$, the pattern is obtained by distance-$k$ coloring of the graph of $A$. For sufficiently large $k$, the method produces accurate trace estimates but the cost of producing the colorings becomes prohibitively expensive. More importantly, it is difficult to search for an optimal $k$ value, since none of the work for prior choices of $k$ can be reused.
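The Monte Carlo baseline both records start from can be sketched with Hutchinson's estimator: average the quadratures $z^T A^{-1} z$ over random Rademacher probes $z$. A dense solve stands in for the iterative solver a Lattice QCD code would use, the test matrix is synthetic, and the hierarchical probing refinement itself is not reproduced:

```python
import numpy as np

# Hutchinson trace estimator for tr(A^{-1}): E[z^T A^{-1} z] = tr(A^{-1})
# when z has i.i.d. +/-1 (Rademacher) entries. Synthetic SPD test matrix;
# real applications replace np.linalg.solve with an iterative solver.
rng = np.random.default_rng(7)
n = 60
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)            # well-conditioned SPD matrix

exact = np.trace(np.linalg.inv(A))     # reference value (small n only)
n_probe = 200
est = 0.0
for _ in range(n_probe):
    z = rng.choice([-1.0, 1.0], size=n)
    est += z @ np.linalg.solve(A, z)   # one quadrature z^T A^{-1} z
est /= n_probe
print(exact, est)
```

Probing replaces the random `z` vectors with structured deterministic ones chosen from a coloring of the graph of A, which is where the distance-k coloring cost discussed above comes in.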
Equations of States in Singular Statistical Estimation
Watanabe, Sumio
2007-01-01
Learning machines which have hierarchical structures or hidden variables are singular statistical models because they are nonidentifiable and their Fisher information matrices are singular. In singular statistical models, the Bayes a posteriori distribution does not converge to the normal distribution, nor does the maximum likelihood estimator satisfy asymptotic normality. This is the main reason why it has been difficult to predict their generalization performance from trained states. In this paper, we study four errors, (1) the Bayes generalization error, (2) the Bayes training error, (3) the Gibbs generalization error, and (4) the Gibbs training error, and prove that there are mathematical relations among these errors. The formulas proved in this paper are equations of states in statistical estimation because they hold for any true distribution, any parametric model, and any a priori distribution. We also show that the Bayes and Gibbs generalization errors can be estimated from the Bayes and Gibbs training errors, and propose widely appl...
Directory of Open Access Journals (Sweden)
Auvinen Petri
2008-01-01
We propose a method for improving the quality of signal from DNA microarrays by using several scans at varying scanner sensitivities. A Bayesian latent intensity model is introduced for the analysis of such data. The method improves the accuracy at which expressions can be measured in all ranges and extends the dynamic range of measured gene expression at the high end. Our method is generic and can be applied to data from any organism, for imaging with any scanner that allows varying the laser power, and for extraction with any image analysis software. Results from a self-self hybridization data set illustrate an improved precision in the estimation of the expression of genes compared to what can be achieved by applying standard methods and using only a single scan.
Broadband Ground Motion Estimates for Scenario Earthquakes in the San Francisco Bay Region
Graves, R. W.
2006-12-01
Using broadband (0-10 Hz) simulation procedures, we are assessing the ground motions that could be generated by different earthquake scenarios occurring on major strike-slip faults of the San Francisco Bay region. These simulations explicitly account for several important ground motion features, including rupture directivity, 3D basin response, and the depletion of high frequency ground motions that occurs for surface rupturing events. This work complements ongoing USGS efforts to quantify the ground shaking hazards throughout the San Francisco Bay region. These efforts involve development and testing of a 3D velocity model for northern California (USGS Bay Area Velocity Model, version 05.1.0) using observations from the 1989 Loma Prieta earthquake, characterization of 1906 rupture scenarios and ground motions, and the development and analysis of rupture scenarios on other Bay Area faults. The adequacy of the simulation model has been tested using ground motion data recorded during the 1989 Loma Prieta earthquake and by comparison with the reported intensity data from the 1906 earthquake. Comparisons of the simulated broadband (0-10 Hz) ground motions with the recorded motions for the 1989 Loma Prieta earthquake demonstrate that the modeling procedure matches the observations without significant bias over a broad range of frequencies, site types, and propagation distances. The Loma Prieta rupture model is based on a wavenumber-squared refinement of the Wald et al (1991) slip distribution, with the rupture velocity set at 75 percent of the local shear wave velocity and a Kostrov-type slip function having a rise time of about 1.4 sec. Simulations of 1906 scenario ruptures indicate very strong directivity effects to the north and south of the assumed epicenter, adjacent to San Francisco. We are currently analyzing additional earthquake scenarios on the Hayward-Rodgers Creek and San Andreas faults in order to provide a more comprehensive framework for assessing
Sakurai, Takeo; Serizawa, Shigeko; Kobayashi, Jun; Kodama, Keita; Lee, Jeong-Hoon; Maki, Hideaki; Zushi, Yasuyuki; Sevilla-Nastor, Janice Beltran; Imaizumi, Yoshitaka; Suzuki, Noriyuki; Horiguchi, Toshihiro; Shiraishi, Hiroaki
2016-01-01
We estimated inflow rates of perfluorooctanesulfonate (PFOS) and perfluorooctanoate (PFOA) to Tokyo Bay, Japan, between February 2004 and February 2011 by a receptor-oriented approach based on quarterly samplings of the bay water. Temporal trends in these inflow rates are an important basis for evaluating changes in PFOS and PFOA emissions in the Tokyo Bay catchment basin. A mixing model estimated the average concentrations of these compounds in the freshwater inflow to the bay, which were then multiplied by estimated freshwater inflow rates to obtain the inflow rates of these compounds. The receptor-oriented approach enabled us to comprehensively cover inflow to the bay, including inflow via direct discharge to the bay. On a logarithmic basis, the rate of inflow for PFOS decreased gradually, particularly after 2006, whereas that for PFOA exhibited a marked stepwise decrease from 2006 to 2007. The rate of inflow for PFOS decreased from 730 kg/y during 2004-2006 to 160 kg/y in 2010, whereas that for PFOA decreased from 2000 kg/y during 2004-2006 to 290 kg/y in 2010. These reductions probably reflected reductions in the use and emission of these compounds and their precursors in the Tokyo Bay catchment basin. Our estimated per-person inflow rates (i.e., inflow rates divided by the estimated population in the basin) for PFOS were generally comparable to previously reported per-person waterborne emission rates in Japan and other countries, whereas those for PFOA were generally higher than previously reported per-person waterborne emission rates. A comparison with previous estimates of household emission rates of these compounds suggested that our inflow estimates included a considerable contribution from point industrial sources.
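The receptor-oriented arithmetic reduces to a units conversion: (estimated freshwater concentration from the mixing model) times (freshwater inflow rate) gives the compound inflow rate to the bay. The concentration and flow values below are illustrative assumptions, not the study's estimates:

```python
# Inflow-rate bookkeeping: concentration (ng/L) x freshwater inflow
# (m^3/s) -> mass inflow (kg/y). Both input values are invented for
# illustration; they are not the Tokyo Bay estimates.
conc_ng_per_L = 60.0            # compound concentration in freshwater inflow
flow_m3_per_s = 300.0           # freshwater inflow rate to the bay
seconds_per_year = 3600 * 24 * 365
ng_per_year = conc_ng_per_L * 1000.0 * flow_m3_per_s * seconds_per_year
kg_per_year = ng_per_year * 1e-12   # 1 kg = 1e12 ng
print(round(kg_per_year, 1))
```

The factor of 1000 converts L to m^3; everything else is time and mass unit conversion.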
Dorazio, Robert; Karanth, K. Ullas
2017-01-01
Motivation: Several spatial capture-recapture (SCR) models have been developed to estimate animal abundance by analyzing the detections of individuals in a spatial array of traps. Most of these models do not use the actual dates and times of detection, even though this information is readily available when using continuous-time recorders, such as microphones or motion-activated cameras. Instead most SCR models either partition the period of trap operation into a set of subjectively chosen discrete intervals and ignore multiple detections of the same individual within each interval, or they simply use the frequency of detections during the period of trap operation and ignore the observed times of detection. Both practices make inefficient use of potentially important information in the data. Model and data analysis: We developed a hierarchical SCR model to estimate the spatial distribution and abundance of animals detected with continuous-time recorders. Our model includes two kinds of point processes: a spatial process to specify the distribution of latent activity centers of individuals within the region of sampling and a temporal process to specify temporal patterns in the detections of individuals. We illustrated this SCR model by analyzing spatial and temporal patterns evident in the camera-trap detections of tigers living in and around the Nagarahole Tiger Reserve in India. We also conducted a simulation study to examine the performance of our model when analyzing data sets of greater complexity than the tiger data. Benefits: Our approach provides three important benefits: First, it exploits all of the information in SCR data obtained using continuous-time recorders. Second, it is sufficiently versatile to allow the effects of both space use and behavior of animals to be specified as functions of covariates that vary over space and time. Third, it allows both the spatial distribution and abundance of individuals to be estimated, effectively providing a species
Dorazio, Robert M; Karanth, K Ullas
2017-01-01
Several spatial capture-recapture (SCR) models have been developed to estimate animal abundance by analyzing the detections of individuals in a spatial array of traps. Most of these models do not use the actual dates and times of detection, even though this information is readily available when using continuous-time recorders, such as microphones or motion-activated cameras. Instead most SCR models either partition the period of trap operation into a set of subjectively chosen discrete intervals and ignore multiple detections of the same individual within each interval, or they simply use the frequency of detections during the period of trap operation and ignore the observed times of detection. Both practices make inefficient use of potentially important information in the data. We developed a hierarchical SCR model to estimate the spatial distribution and abundance of animals detected with continuous-time recorders. Our model includes two kinds of point processes: a spatial process to specify the distribution of latent activity centers of individuals within the region of sampling and a temporal process to specify temporal patterns in the detections of individuals. We illustrated this SCR model by analyzing spatial and temporal patterns evident in the camera-trap detections of tigers living in and around the Nagarahole Tiger Reserve in India. We also conducted a simulation study to examine the performance of our model when analyzing data sets of greater complexity than the tiger data. Our approach provides three important benefits: First, it exploits all of the information in SCR data obtained using continuous-time recorders. Second, it is sufficiently versatile to allow the effects of both space use and behavior of animals to be specified as functions of covariates that vary over space and time. Third, it allows both the spatial distribution and abundance of individuals to be estimated, effectively providing a species distribution model, even in cases where
Gartner, J.W.
2004-01-01
The estimation of mass concentration of suspended solids is one of the properties needed to understand the characteristics of sediment transport in bays and estuaries. However, useful measurements or estimates of this property are often problematic when employing the usual methods of determination from collected water samples or optical sensors. Analysis of water samples tends to undersample the highly variable character of suspended solids, and optical sensors often become useless from biological fouling in highly productive regions. Acoustic sensors, such as acoustic Doppler current profilers that are now routinely used to measure water velocity, have been shown to hold promise as a means of quantitatively estimating suspended solids from acoustic backscatter intensity, a parameter used in velocity measurement. To further evaluate application of this technique using commercially available instruments, profiles of suspended solids concentrations are estimated from acoustic backscatter intensity recorded by 1200- and 2400-kHz broadband acoustic Doppler current profilers located at two sites in San Francisco Bay, California. ADCP backscatter intensity is calibrated using optical backscatterance data from an instrument located at a depth close to the ADCP transducers. In addition to losses from spherical spreading and water absorption, calculations of acoustic transmission losses account for attenuation from suspended sediment and correction for nonspherical spreading in the near field of the acoustic transducer. Acoustic estimates of suspended solids consisting of cohesive and noncohesive sediments are found to agree within about 8-10% (of the total range of concentration) with those values estimated by a second optical backscatterance sensor located at a depth further from the ADCP transducers. The success of this approach using commercially available Doppler profilers provides promise that this technique might be appropriate and useful under certain conditions in
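The calibration chain described above (range-corrected backscatter via the sonar equation, then an empirical dB-to-concentration regression against co-located optical sensors) can be sketched as follows. All constants here — the counts-to-dB factor, noise floor, water absorption coefficient, and regression coefficients — are illustrative placeholders, not values from the study:

```python
import numpy as np

def range_corrected_backscatter(E, r, alpha_w=0.5, E0=40.0, kc=0.45):
    """Convert ADCP echo intensity (counts) to range-corrected backscatter (dB).

    E : echo intensity in counts; r : slant range to the bin (m);
    alpha_w : water absorption (dB/m); E0 : noise floor (counts);
    kc : counts-to-dB conversion factor.  All values are illustrative.
    """
    # Sonar equation: instrument dB plus two-way spreading and absorption losses
    return kc * (E - E0) + 20.0 * np.log10(r) + 2.0 * alpha_w * r

def ssc_from_backscatter(Sv, a=0.045, b=-1.2):
    """Empirical calibration SSC = 10**(a*Sv + b), with a and b fitted
    against co-located optical backscatterance (OBS) samples."""
    return 10.0 ** (a * Sv + b)

E = np.array([120.0, 150.0, 180.0])   # echo intensity, counts
r = np.array([2.0, 5.0, 10.0])        # range to bin, m
Sv = range_corrected_backscatter(E, r)
ssc = ssc_from_backscatter(Sv)        # suspended solids, mg/L (illustrative)
```

In practice the attenuation term would also include sediment attenuation and a near-field correction, as the abstract notes.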
Multi-band algorithms for the estimation of chlorophyll concentration in the Chesapeake Bay
Gilerson, Alexander
2015-10-14
Standard blue-green ratio algorithms do not usually work well in turbid productive waters because of the contamination of the blue and green bands by CDOM absorption and scattering by non-algal particles. One of the alternative approaches is based on two- or three-band ratio algorithms in the red/NIR part of the spectrum, which require 665, 708, and 753 nm bands (or similar) and which work well in various waters all over the world. The critical 708 nm band for these algorithms is not available on the MODIS and VIIRS sensors, which limits applications of this approach. We report on another approach in which a combination of the 745 nm band with blue-green-red bands was the basis for new algorithms. A multi-band algorithm that includes the ratios Rrs(488)/Rrs(551) and Rrs(671)/Rrs(745), and a two-band algorithm based on the Rrs(671)/Rrs(745) ratio, were developed with the main focus on the Chesapeake Bay (USA) waters. These algorithms were tested on specially developed synthetic datasets representing the main relationships between water parameters in the Bay taken from the NASA NOMAD database and available literature, on the field data collected by our group during a 2013 campaign in the Bay, as well as NASA SeaBASS data from other groups, and on matchups between satellite imagery and water parameters measured by the Chesapeake Bay program. Our results demonstrate that the coefficient of determination can be as high as R2 > 0.90 for the new algorithms in comparison with R2 = 0.6 for the standard OC3V algorithm on the same field dataset. Substantial improvement was also achieved by applying a similar approach (inclusion of the Rrs(667)/Rrs(753) ratio) for MODIS matchups. Results for VIIRS are not yet conclusive. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
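A minimal sketch of the two algorithm forms mentioned above: a two-band ratio estimator and a multi-band combination of Rrs(488)/Rrs(551) with Rrs(671)/Rrs(745). The functional forms and all coefficients below are hypothetical stand-ins; the tuned Chesapeake Bay coefficients are not given in the abstract:

```python
import numpy as np

def chl_two_band(Rrs671, Rrs745, a=10.0, b=1.5):
    """Two-band estimator: Chl = a * (Rrs(671)/Rrs(745))**b.
    Coefficients a, b are placeholders, not the published values."""
    return a * (Rrs671 / Rrs745) ** b

def chl_multi_band(Rrs488, Rrs551, Rrs671, Rrs745, c=(0.5, -1.0, 1.2)):
    """Multi-band form combining both ratios in log10 space:
    log10(Chl) = c0 + c1*log10(Rrs488/Rrs551) + c2*log10(Rrs671/Rrs745).
    Coefficients c are hypothetical."""
    x1 = np.log10(Rrs488 / Rrs551)
    x2 = np.log10(Rrs671 / Rrs745)
    c0, c1, c2 = c
    return 10.0 ** (c0 + c1 * x1 + c2 * x2)

# Illustrative reflectances (sr^-1), not field data
chl = chl_two_band(0.004, 0.002)
chl_mb = chl_multi_band(0.003, 0.004, 0.004, 0.002)
```

In operational use, coefficients would be fitted by regression against in-situ chlorophyll, as the study does with its synthetic and field datasets.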
Cifelli, R.; Chen, H.; Chandra, C. V.
2016-12-01
The San Francisco Bay area is home to over 5 million people. In February 2016, the area also hosted the NFL Super Bowl, bringing additional people and focusing national attention on the region. Based on the El Nino forecast, public officials expressed concern for heavy rainfall and flooding with the potential for threats to public safety, costly flood damage to infrastructure, negative impacts to water quality (e.g., combined sewer overflows) and major disruptions in transportation. Mitigation of the negative impacts listed above requires accurate precipitation monitoring (quantitative precipitation estimation, QPE) and prediction (including radar nowcasting). The proximity to terrain and maritime conditions as well as the siting of existing NEXRAD radars are all challenges in providing accurate, short-term near-surface rainfall estimates in the Bay area urban region. As part of a collaborative effort between the National Oceanic and Atmospheric Administration (NOAA) Earth System Research Laboratory, Colorado State University (CSU), and Santa Clara Valley Water District (SCVWD), an X-band dual-polarization radar was deployed in Santa Clara Valley in February of 2016 to provide support for the National Weather Service during the Super Bowl and NOAA's El Nino Rapid Response field campaign. This high-resolution radar was deployed on the roof of one of the buildings at the Penitencia Water Treatment Plant. The main goal was to provide detailed precipitation information for use in weather forecasting and to assist the water district in predicting rainfall and streamflow with real-time rainfall data over Santa Clara County, especially during a potentially large El Nino year. The following figure shows the radar's coverage map, as well as sample reflectivity observations on March 06, 2016, at 00:04 UTC. This paper presents results from a pilot study from February to May 2016 demonstrating the use of X-band weather radar for quantitative precipitation
Digital Repository Service at National Institute of Oceanography (India)
Chandramohan, P.; Nayak, B.U.; Raju, N.S.N.
lower values, Gumbel distribution appears to estimate the extreme wave height reasonably well and gives a realistic value for the study region. The extreme wave estimated based only on the monsoon wave data deviated significantly from the estimate based...
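Extreme-value estimation of the kind described, fitting a Gumbel distribution to wave-height maxima and extrapolating to a design return period, can be sketched as below. The sample values are illustrative, and the method-of-moments fit is one simple choice among several (maximum likelihood is also common):

```python
import math

def gumbel_fit_moments(sample):
    """Method-of-moments Gumbel fit: beta = s*sqrt(6)/pi,
    mu = mean - gamma*beta (gamma = Euler-Mascheroni constant)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - 0.5772156649 * beta
    return mu, beta

def gumbel_return_level(mu, beta, T):
    """T-year return level: the value exceeded on average once in T years."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Illustrative annual maximum significant wave heights (m), not the study's data
annual_max_hs = [3.1, 2.8, 3.5, 4.0, 3.3, 3.7, 2.9, 3.6]
mu, beta = gumbel_fit_moments(annual_max_hs)
h100 = gumbel_return_level(mu, beta, 100)   # 100-year design wave height
```

The snippet illustrates why fitting on monsoon-only data can bias the estimate: the fitted location and scale, and hence the extrapolated return level, depend directly on which subset of maxima enters the sample.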
Age Estimates of Holocene Glacial Retreat in Lapeyrère Bay, Anvers Island, Antarctica
Mead, K. A.; Wellner, J. S.; Rosenheim, B. E.
2011-12-01
Lapeyrère Bay is a fjord on the eastern side of Anvers Island, located off the Western Antarctic Peninsula. Anvers Island has a maximum elevation of 2400 m (comprised of ice overlying bedrock), and experiences colder temperatures and more precipitation than the South Shetlands, which are ~230 km to the north. Two glaciers enter Lapeyrère Bay: the large, avalanche-prone Iliad Glacier and a smaller glacier confined to an unnamed northern cove. Though several research cruises have visited Lapeyrère Bay, very little has been published on the fjord's glacial retreat history or sediment flux. The primary purpose of this study is to reconstruct the glacial retreat and sediment flux histories of Lapeyrère Bay using a SHALDRIL core and standard piston cores for chronology and sedimentary facies analysis, and multibeam swath bathymetry data for identifying seafloor morphological features. Preliminary core data from the proximal northern flank of Lapeyrère Bay show greenish grey sandy mud with scattered pebble and sand lens lithology. A core taken in the distal-most part of the fjord is largely diatomaceous sediment grading into grey silty mud with thin sandy turbidites. Multibeam data have revealed seafloor features including a grounding zone wedge at the entrance of the unnamed cove of northern Lapeyrère Bay, drumlins, glacial lineations, and a glacial outwash fan near the ocean terminus of the Iliad Glacier. Additionally, this study seeks to assess the effectiveness of a novel 14C method of dating sediment lacking sufficient calcareous material for carbonate 14C dating. The method being tested is ramped pyrolysis radiocarbon analysis, which dates individual fractions of organic material. It is hypothesized that ramped pyrolysis will improve upon bulk acid insoluble organic material (AIOM) dating, as AIOM can include both autochthonous syndepositionally aged carbon and allochthonous pre-aged carbon, resulting in 14C ages inherently older than the
Indian Academy of Sciences (India)
E J D'Sa; C Hu; F E Muller-Karger; K L Carder
2002-09-01
Estimates of water quality variables such as chlorophyll concentration (Chl), colored dissolved organic matter (CDOM), or salinity from satellite sensors are of great interest to resource managers monitoring coastal regions such as Florida Bay and the Florida Shelf. However, accurate estimates of these variables using standard ocean color algorithms have been difficult due to the complex nature of the light field in these environments. In this study, we process SeaWiFS satellite data using two recently developed algorithms: one for atmospheric correction and the other a semi-analytic bio-optical algorithm, and compare the results with standard SeaWiFS algorithms. Overall, the two algorithms produced more realistic estimates of Chl and CDOM distributions in Florida Shelf and Bay waters. Estimates of surface salinity were obtained from the CDOM absorption field assuming a conservative mixing behavior of these waters. A comparison of SeaWiFS-derived Chl and CDOM absorption with field measurements in Florida Bay indicated that although well correlated, CDOM was underestimated, while Chl was overestimated. Bottom reflectance appeared to affect these estimates at the shallow central Bay stations during the winter. These results demonstrate the need for new bio-optical algorithms or tuning of the parameters used in the bio-optical algorithm for local conditions encountered in the Bay.
Cross, Paul C.; Maichak, Eric J.; Rogerson, Jared D.; Irvine, Kathryn M.; Jones, Jennifer D; Heisey, Dennis M.; Edwards, William H.; Scurlock, Brandon M.
2015-01-01
Understanding the seasonal timing of disease transmission can lead to more effective control strategies, but the seasonality of transmission is often unknown for pathogens transmitted directly. We inserted vaginal implant transmitters (VITs) in 575 elk (Cervus elaphus canadensis) from 2006 to 2014 to assess when reproductive failures (i.e., abortions or stillbirths) occur, which is the primary transmission route of Brucella abortus, the causative agent of brucellosis in the Greater Yellowstone Ecosystem. Using a survival analysis framework, we developed a Bayesian hierarchical model that simultaneously estimated the total baseline hazard of a reproductive event as well as its 2 mutually exclusive parts (abortions or live births). Approximately 16% (95% CI = 0.10, 0.23) of the pregnant seropositive elk had reproductive failures, whereas 2% (95% CI = 0.01, 0.04) of the seronegative elk had probable abortions. Reproductive failures could have occurred as early as 13 February and as late as 10 July, peaking from March through May. Model results suggest that less than 5% of likely abortions occurred after 6 June each year and abortions were approximately 5 times more likely in March, April, or May compared to February or June. In western Wyoming, supplemental feeding of elk begins in December and ends during the peak of elk abortions and brucellosis transmission (i.e., Mar and Apr). Years with more snow may enhance elk-to-elk transmission on supplemental feeding areas because elk are artificially aggregated for the majority of the transmission season. Elk-to-cattle transmission will depend on the transmission period relative to the end of the supplemental feeding season, elk seroprevalence, population size, and the amount of commingling. Our statistical approach allowed us to estimate the probability density function of different event types over time, which may be applicable to other cause-specific survival analyses. It is often challenging to assess the
Poor, Noreen D; Pribble, J Raymond; Schwede, Donna B
2013-01-01
The U.S. Environmental Protection Agency (EPA) has developed the Watershed Deposition Tool (WDT) to calculate from the Community Multiscale Air Quality (CMAQ) model output the nitrogen, sulfur and mercury deposition rates to watersheds and their sub-basins. The CMAQ model simulates from first principles the transport, transformation, and removal of atmospheric pollutants. We applied WDT to estimate the atmospheric deposition of reactive nitrogen (N) to Tampa Bay and its watershed. For 2002 and within the boundaries of Tampa Bay's watershed, modeled atmospheric deposition rates averaged 13.3 kg N ha(-1) yr(-1) and ranged from 6.24 kg N ha(-1) yr(-1) at the bay's boundary with the Gulf of Mexico to 21.4 kg N ha(-1) yr(-1) near Tampa's urban core, based on a 12-km x 12-km grid cell size. CMAQ-predicted loading rates were 1,080 metric tons N yr(-1) to Tampa Bay and 8,280 metric tons N yr(-1) to the land portion of its watershed. If we assume a watershed-to-bay transfer rate of 18% for indirect loading, our estimates of the 2002 direct and indirect loading rates to Tampa Bay were 1,080 metric tons N and 1,490 metric tons N, respectively, for an atmospheric loading of 2,570 metric tons N or 71% of the total N loading to Tampa Bay. To evaluate the potential impact of the U.S. EPA Clean Air Interstate Rule (CAIR, replaced with the Cross-State Air Pollution Rule), Tier 2 Vehicle and Gasoline Sulfur Rules, Heavy Duty Highway Rule, and Non-Road Diesel Rule, we compared CMAQ outputs between 2020 and 2002 simulations, with only the emissions inventories changed. The CMAQ-projected change in atmospheric loading rates between these emissions inventories was 857 metric tons N to Tampa Bay, or about 24% of the 2002 loading of 3,640 metric tons N to Tampa Bay from all sources. Air quality modeling reveals that atmospheric deposition of reactive nitrogen (N) contributes a significant fraction to Tampa Bay's total N loading from external sources. Regulatory drivers that lower nitrogen oxide
EnviroAtlas - Green Bay, WI - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
Bayes filter modification for drivability map estimation with observations from stereo vision
Panchenko, Aleksei; Prun, Viktor; Turchenkov, Dmitri
2017-02-01
Reconstruction of a drivability map for a moving vehicle is a well-known research topic in applied robotics. Here, creating such a map for an autonomous truck on a generally planar surface containing separate obstacles is considered. The source of measurements for the truck is a calibrated pair of cameras. The stereo system detects and reconstructs several types of objects, such as road borders, other vehicles, pedestrians, and general tall or highly saturated objects (e.g. road cones). For creating a robust mapping module we use a modification of Bayes filtering, which introduces some novel techniques for the occupancy map update step. Specifically, our modified version remains applicable in the presence of false-positive measurement errors, stereo shading, and obstacle occlusion. We implemented the technique and achieved real-time computation at 15 FPS on an industrial shake-proof PC. Our real-world experiments show the positive effect of the filtering step.
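The occupancy-map update at the core of such a Bayes filter is commonly implemented in log-odds form. The sketch below shows the standard per-cell update with log-odds clamping, one common way to stay responsive despite occasional false positives; the sensor probabilities and bounds are illustrative assumptions, not the authors' tuned values:

```python
import math

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1.0 - p))

def update_cell(l, hit, p_hit=0.7, p_miss=0.4, l_min=-4.0, l_max=4.0):
    """Log-odds Bayes update of one occupancy cell.

    p_hit  : P(occupied | cell observed as obstacle)
    p_miss : P(occupied | cell observed as free)
    Clamping l to [l_min, l_max] prevents the cell from saturating,
    so a few false positives can still be corrected later.
    """
    l += logit(p_hit) if hit else logit(p_miss)
    return max(l_min, min(l_max, l))

def occupancy(l):
    """Recover occupancy probability from log-odds."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

l = 0.0                               # prior log-odds: p = 0.5
for z in [True, True, False, True]:   # simulated stereo detections of one cell
    l = update_cell(l, z)
p = occupancy(l)
```

A full mapping module would run this update over every cell touched by each stereo measurement ray, which is where handling of shading and occlusion enters.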
Institute of Scientific and Technical Information of China (English)
袁平; 丁峰
2008-01-01
By using the Kronecker product, an identification model for multivariable ARX-like stochastic systems is derived, and a hierarchical least-squares parameter estimation algorithm is developed based on the hierarchical identification principle. The proposed hierarchical least-squares algorithm requires less computation than the existing recursive least-squares algorithm. A simulation example is included.
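As a point of comparison for the hierarchical algorithm, a plain recursive least-squares (RLS) identifier for an ARX-like model looks like the following sketch; the forgetting factor, initial covariance, and toy noiseless system are assumptions for illustration:

```python
import numpy as np

def rls_identify(phi_seq, y_seq, n_params, lam=1.0):
    """Recursive least squares for y_t = phi_t^T theta + noise.
    lam is a forgetting factor (1.0 = ordinary RLS)."""
    theta = np.zeros(n_params)
    P = 1e4 * np.eye(n_params)                 # large initial covariance
    for phi, y in zip(phi_seq, y_seq):
        phi = np.asarray(phi, dtype=float)
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + k * (y - phi @ theta)  # innovation update
        P = (P - np.outer(k, phi @ P)) / lam   # covariance update
    return theta

# Identify a 2-parameter ARX model y_t = a*y_{t-1} + b*u_{t-1} (noiseless toy)
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
u = rng.standard_normal(200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1]
phi = [(y[t - 1], u[t - 1]) for t in range(1, 200)]
theta = rls_identify(phi, y[1:], 2)
```

The hierarchical scheme of the paper reduces cost by decomposing the multivariable model into subsystems estimated interactively, rather than updating one large covariance matrix as above.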
Lietz, Arthur C.
1999-01-01
Biscayne Bay is an oligotrophic, subtropical estuary located along the southeastern coast of Florida that provides habitat for a variety of plant and animal life. Concern has arisen with regard to the ecological health of Biscayne Bay because of the presence of nutrient-laden discharges from the east coast canals that drain into the bay. This concern, as well as planned diversion of discharges for ecosystem restoration from the urban and agricultural corridors of Miami-Dade County to Everglades National Park, served as the impetus for a study conducted during the 1996 and 1997 water years to estimate nutrient loads discharged from the east coast canals into Biscayne Bay. Analytical results indicated that the highest concentration of any individual nutrient sampled for in the study was 4.38 mg/L (milligrams per liter) for nitrate at one site, and the lowest concentrations determined were below the detection limits for orthophosphate at six sites and nitrite at four sites. Median concentrations for all the sites were 0.75 mg/L for total organic nitrogen, 0.10 mg/L for ammonia, 0.02 mg/L for nitrite, 0.18 mg/L for nitrate, 0.20 mg/L for nitrite plus nitrate nitrogen, 0.02 mg/L for total phosphorus, and 0.005 mg/L for orthophosphate. The maximum total phosphorus concentration of 0.31 mg/L was the only nutrient concentration to exceed U.S. Environmental Protection Agency (1986) water-quality criteria. High concentrations of total phosphorus usually reflect contamination as a result of human activities. Five sites exceeded the fresh-water quality standard of 0.5 mg/L for ammonia concentration as determined by the Miami-Dade County Department of Environmental Resources Management. Median total organic nitrogen concentrations were higher in urban and forested/wetland areas than in agricultural areas; median concentrations of nitrite, nitrate, and nitrite plus nitrate nitrogen were higher in agricultural areas than in urban and forested/wetland areas; and ammonia, total
DEFF Research Database (Denmark)
Nielsen, Anders; Lewy, Peter
2002-01-01
A simulation study was carried out for a separable fish stock assessment model including commercial and survey catch-at-age and effort data. All catches are considered stochastic variables subject to sampling and process variations. The results showed that the Bayes estimator of spawning biomass ...
Mann, Roger, Steve Jordan, Gary Smith, Kennedy Paynter, James Wesson, Mary Christman, Jessica Vanisko, Juliana Harding, Kelly Greenhawk and Melissa Southworth. 2003. Oyster Population Estimation in Support of the Ten-Year Goal for Oyster Restoration in the Chesapeake Bay: Develop...
Digital Repository Service at National Institute of Oceanography (India)
Shenoi, S.S.C.; Shankar, D.; Shetye, S.R.
The accuracy of data from the Simple Ocean Data Assimilation (SODA) model for estimating the heat budget of the upper ocean is tested in the Arabian Sea and the Bay of Bengal. SODA is able to reproduce the changes in heat content when...
Institute of Scientific and Technical Information of China (English)
XUE Ying; REN Yiping; MENG Wenrong; LI Long; MAO Xia; HAN Dongyan; MA Qiuyun
2013-01-01
Cephalopods play key roles in global marine ecosystems as both predators and prey. Regressive estimation of the original size and weight of cephalopods from beak measurements is a powerful tool for interrogating the feeding ecology of predators at higher trophic levels. In this study, regressive relationships among beak measurements and body length and weight were determined for an octopus species (Octopus variabilis), an important endemic cephalopod species in the northwest Pacific Ocean. A total of 193 individuals (63 males and 130 females) were collected at a monthly interval from Jiaozhou Bay, China. Regressive relationships among 6 beak measurements (upper hood length, UHL; upper crest length, UCL; lower hood length, LHL; lower crest length, LCL; and upper and lower beak weights) and mantle length (ML), total length (TL) and body weight (W) were determined. Results showed that the relationships between beak size and TL and beak size and ML were linearly regressive, while those between beak size and W fitted a power function model. LHL and UCL were the most useful measurements for estimating the size and biomass of O. variabilis. The relationships among beak measurements and body length (either ML or TL) were not significantly different between the two sexes, while those among several beak measurements (UHL, LHL and LBW) and body weight (W) were sexually different. Since male individuals of this species have a slightly greater body weight distribution than female individuals, body weight was not an appropriate measurement for estimating size and biomass, especially when the sex of individuals in the stomachs of predators was unknown. These relationships provide essential information for future use in size and biomass estimation of O. variabilis, as well as the estimation of predator/prey size ratios in the diet of top predators.
Xue, Ying; Ren, Yiping; Meng, Wenrong; Li, Long; Mao, Xia; Han, Dongyan; Ma, Qiuyun
2013-09-01
Cephalopods play key roles in global marine ecosystems as both predators and prey. Regressive estimation of the original size and weight of cephalopods from beak measurements is a powerful tool for interrogating the feeding ecology of predators at higher trophic levels. In this study, regressive relationships among beak measurements and body length and weight were determined for an octopus species (Octopus variabilis), an important endemic cephalopod species in the northwest Pacific Ocean. A total of 193 individuals (63 males and 130 females) were collected at a monthly interval from Jiaozhou Bay, China. Regressive relationships among 6 beak measurements (upper hood length, UHL; upper crest length, UCL; lower hood length, LHL; lower crest length, LCL; and upper and lower beak weights) and mantle length (ML), total length (TL) and body weight (W) were determined. Results showed that the relationships between beak size and TL and beak size and ML were linearly regressive, while those between beak size and W fitted a power function model. LHL and UCL were the most useful measurements for estimating the size and biomass of O. variabilis. The relationships among beak measurements and body length (either ML or TL) were not significantly different between the two sexes, while those among several beak measurements (UHL, LHL and LBW) and body weight (W) were sexually different. Since male individuals of this species have a slightly greater body weight distribution than female individuals, body weight was not an appropriate measurement for estimating size and biomass, especially when the sex of individuals in the stomachs of predators was unknown. These relationships provided essential information for future use in size and biomass estimation of O. variabilis, as well as the estimation of predator/prey size ratios in the diet of top predators.
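The power-function relationship between beak size and body weight (W = a·L^b) is typically fitted by linear regression in log-log space. The sketch below demonstrates this on invented measurements, not the paper's data:

```python
import numpy as np

def fit_power_law(L, W):
    """Fit W = a * L**b by linear regression of ln(W) on ln(L):
    the slope is b and the intercept is ln(a)."""
    b, log_a = np.polyfit(np.log(L), np.log(W), 1)
    return np.exp(log_a), b

# Illustrative lower hood lengths (mm) and body weights (g);
# the exact power law here is invented so the fit can be checked
LHL = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
W = 0.9 * LHL ** 2.8
a, b = fit_power_law(LHL, W)
```

With real measurements the points scatter around the power law, and separate fits by sex would be needed for the measurements the study found to be sexually different.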
Xu, Peng; Mao, Xinyan; Jiang, Wensheng
2017-05-01
Three independent methods, the dynamical balance (DB) method, the turbulence parameter (TP) method, and the log-layer fit (LF) method, are commonly employed to estimate the bottom stress and bottom drag coefficient in strong tidal systems. However, their results usually differ from each other, and the differences are attributed to form drag. Alternatively, some researchers argued that the differences are caused by overestimates in some methods. To assess the performance of the three independent methods, we applied them simultaneously in a bay with highly asymmetric tides. The results of the DB and TP methods are consistent with each other in both magnitude and temporal variation pattern. The consistency of the results of the two methods indicates that skin friction is dominant in the bay. The results of the DB and TP methods reveal obvious flood-dominant asymmetry caused by tidal straining. This flood-dominant asymmetry is enhanced during the transition period from spring to neap tide. When the original log-layer fit is employed, the results are much larger than those of the DB and TP methods, and these differences cannot be attributed to form drag since skin friction is dominant in the bay. Moreover, the results of the original log-layer fit reveal an obvious ebb-dominant asymmetry, which contradicts the results of the DB and TP methods. Therefore, the results of the original fit are simply overestimates and lack physical meaning. By considering the effect of stratification on the mixing length, the modified log-layer fit achieves results with magnitudes that are close to those of the DB and TP methods, indicating that the modified log-layer fit is more representative of the bottom stress than the original log-layer fit in terms of physical meaning. However, the results of the modified log-layer fit still exhibit an ebb-dominant asymmetry in contrast to that of the DB and TP methods, implying that the empirical formula of the mixing
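The log-layer fit referenced above estimates the friction velocity u* and roughness length z0 from the law of the wall, u(z) = (u*/κ) ln(z/z0), and forms a drag coefficient from them. A minimal sketch follows, using a synthetic unstratified profile with no mixing-length correction (which is exactly the regime where the original fit applies):

```python
import numpy as np

KAPPA = 0.41   # von Karman constant

def log_layer_fit(z, u):
    """Fit u(z) = (u*/kappa) * ln(z/z0) to near-bed velocities.
    Linear regression of u on ln(z): slope = u*/kappa; intercept fixes z0."""
    slope, intercept = np.polyfit(np.log(z), u, 1)
    u_star = KAPPA * slope
    z0 = np.exp(-intercept / slope)
    return u_star, z0

def drag_coefficient(u_star, u_ref):
    """Bottom drag coefficient referenced to a velocity u_ref: Cd = (u*/u_ref)**2."""
    return (u_star / u_ref) ** 2

# Synthetic log-layer profile with u* = 0.02 m/s and z0 = 0.001 m
z = np.array([0.25, 0.5, 1.0, 2.0])          # heights above bed, m
u = (0.02 / KAPPA) * np.log(z / 0.001)       # velocities, m/s
u_star, z0 = log_layer_fit(z, u)
cd = drag_coefficient(u_star, u[-1])
```

Stratification flattens the near-bed profile relative to the unstratified log law, which is why the original fit can overestimate u* and hence the stress, as the abstract argues.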
EnviroAtlas - Green Bay, WI - Estimated Percent Green Space Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates green space along walkable roads. Green space within 25 meters of the road centerline is included and the percentage is based on...
EnviroAtlas - Green Bay, WI - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
Kabiri, Keivan; Moradi, Masoud
2016-08-01
This study examined the advantages of incorporating the new band of Landsat-8 OLI imagery (band 1: Coastal/Aerosol, 435-451 nm) into a model for estimating Secchi disk depth (SDD) values (an indicator of transparency) in near-shore coastal waters using multispectral bands. Chabahar Bay in the southern part of Iran (north of the Gulf of Oman) was selected as the study area. Two approximately four-hour in-situ observation campaigns (including 48 and 56 field-measured SDD values, respectively) were performed in the study area using a Secchi disk; each was designed to start about two hours before and end about two hours after the time of satellite overpass. A model was then formulated for estimating SDD values from terms including all possible linear and mutual ratio values of the Coastal/Aerosol (B1), Blue (B2), Green (B3), and Red (B4) bands. In the first step, the correlation between the reflectance/ratio reflectance values of these bands and Ln(SDD) values was calculated to identify the bands/band ratios most correlated with the first set of field-measured SDD values. Consequently, 17 combinations of the most correlated bands/band ratios were selected to estimate SDD values. In this regard, 32 of the 48 field observations were used to determine the unknown coefficients of the models using multiple linear regression, and the remaining 16 points were reserved for accuracy assessment of the results. The measured SDD values from the second field observations were then utilized for validating the results. Final results demonstrated that a combination of linear terms (B1, B2, and B3) and band-ratio terms (B4/B3, B3/B1, and B2/B1) yielded the highest accuracy (R2=0.866 and RMSE=0.919, SVM feature weight=4.294). This was in agreement with the results obtained from the second observations. Finally, by applying the entire 104 field-observed SDD values, the model in the form SDD=0.077exp(1.209RB1
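The reported best model regresses ln(SDD) on linear band terms (B1, B2, B3) plus the ratios B4/B3, B3/B1, and B2/B1. A least-squares sketch of that design follows, fitted to synthetic reflectances; the data and resulting coefficients are illustrative, not the published model:

```python
import numpy as np

def design_matrix(Rrs):
    """Terms of the reported best model: intercept, linear B1, B2, B3,
    and the ratios B4/B3, B3/B1, B2/B1 (coefficients left to the fit)."""
    return np.column_stack([
        np.ones(len(Rrs["B1"])),
        Rrs["B1"], Rrs["B2"], Rrs["B3"],
        Rrs["B4"] / Rrs["B3"], Rrs["B3"] / Rrs["B1"], Rrs["B2"] / Rrs["B1"],
    ])

def fit_ln_sdd(Rrs, sdd):
    """Multiple linear regression of ln(SDD) on the design-matrix terms."""
    X = design_matrix(Rrs)
    coef, *_ = np.linalg.lstsq(X, np.log(sdd), rcond=None)
    return coef

# Synthetic reflectances standing in for field data (not the paper's dataset);
# the true ln(SDD) model here is chosen inside the design space so the fit
# can be verified exactly
rng = np.random.default_rng(1)
Rrs = {b: 0.01 + 0.02 * rng.random(40) for b in ("B1", "B2", "B3", "B4")}
sdd = np.exp(1.0 + 30.0 * Rrs["B1"] - 20.0 * Rrs["B3"])
coef = fit_ln_sdd(Rrs, sdd)
pred_sdd = np.exp(design_matrix(Rrs) @ coef)
```

With real observations one would fit on a calibration subset and score the held-out points, mirroring the 32/16 split used in the study.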
Bayes and empirical Bayes: do they merge?
Petrone, Sonia; Scricciolo, Catia
2012-01-01
Bayesian inference is attractive for its coherence and good frequentist properties. However, it is a common experience that eliciting an honest prior may be difficult and, in practice, people often take an empirical Bayes approach, plugging empirical estimates of the prior hyperparameters into the posterior distribution. Even if not rigorously justified, the underlying idea is that, when the sample size is large, empirical Bayes leads to "similar" inferential answers. Yet, precise mathematical results seem to be missing. In this work, we give a more rigorous justification in terms of merging of Bayes and empirical Bayes posterior distributions. We consider two notions of merging: Bayesian weak merging and frequentist merging in total variation. Since weak merging is related to consistency, we provide sufficient conditions for consistency of empirical Bayes posteriors. Also, we show that, under regularity conditions, the empirical Bayes procedure asymptotically selects the value of the hyperparameter for ...
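The plug-in idea described above can be made concrete in the simplest setting, the normal means model: hyperparameters are estimated from the data and substituted into the posterior mean. The sketch below uses a method-of-moments hyperparameter estimate; the data values are illustrative:

```python
import numpy as np

def empirical_bayes_shrink(x, sigma=1.0):
    """Empirical Bayes for the normal means model
        x_i ~ N(theta_i, sigma^2),  theta_i ~ N(mu, tau^2).
    Rather than eliciting (mu, tau^2) as a prior, they are plugged in
    from method-of-moments estimates, then the posterior mean
        E[theta_i | x_i] = mu + tau^2/(tau^2 + sigma^2) * (x_i - mu)
    is evaluated at those estimates.
    """
    x = np.asarray(x, dtype=float)
    mu_hat = x.mean()
    # Excess of sample variance over noise variance estimates tau^2
    tau2_hat = max(x.var(ddof=1) - sigma ** 2, 0.0)
    shrink = tau2_hat / (tau2_hat + sigma ** 2)   # posterior weight on the data
    return mu_hat + shrink * (x - mu_hat)

x = np.array([2.1, -0.3, 1.5, 0.2, 3.0, -1.1])   # illustrative observations
post = empirical_bayes_shrink(x)
```

The merging question studied in the paper asks when the posterior built from such plug-in hyperparameters becomes indistinguishable, asymptotically, from a genuinely Bayesian posterior.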
Directory of Open Access Journals (Sweden)
Ina C Ansmann
Moreton Bay, Queensland, Australia is an area of high biodiversity and conservation value and home to two sympatric sub-populations of Indo-Pacific bottlenose dolphins (Tursiops aduncus). These dolphins live in close proximity to major urban developments. Successful management requires information regarding their abundance. Here, we estimate total and effective population sizes of bottlenose dolphins in Moreton Bay using photo-identification and genetic data collected during boat-based surveys in 2008-2010. Abundance (N) was estimated using open population mark-recapture models based on sighting histories of distinctive individuals. Effective population size (Ne) was estimated using the linkage disequilibrium method based on nuclear genetic data at 20 microsatellite markers in skin samples, and corrected for bias caused by overlapping generations (Nec). A total of 174 sightings of dolphin groups were recorded and 365 different individuals identified. Over the whole of Moreton Bay, a population size N of 554 ± 22.2 (SE) (95% CI: 510-598) was estimated. The southern bay sub-population was small, at an estimated N = 193 ± 6.4 (SE) (95% CI: 181-207), while the North sub-population was more numerous, with 446 ± 56 (SE) (95% CI: 336-556) individuals. The small estimated effective population size of the southern sub-population (Nec = 56, 95% CI: 33-128) raises conservation concerns. A power analysis suggested that reliably detecting small (5%) declines in the size of this population would require substantial survey effort (>4 years of annual mark-recapture surveys at the precision levels achieved here). To ensure that ecological as well as genetic diversity within this population of bottlenose dolphins is preserved, we consider that the North and South sub-populations should be treated as separate management units. Systematic surveys over smaller areas holding locally-adapted sub-populations are suggested as an alternative method for increasing ability to detect
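The study fits open-population mark-recapture models; as a minimal illustration of the underlying mark-recapture logic, a closed-population Chapman estimator from two survey occasions looks like this (the sighting counts are invented for the example):

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator.

    n1 : distinctive individuals identified in the first survey ("marked")
    n2 : individuals identified in the second survey
    m2 : marked individuals resighted in the second survey
    Returns the abundance estimate and its approximate (Seber) variance.
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / \
          ((m2 + 1) ** 2 * (m2 + 2))
    return n_hat, var

# Invented photo-ID counts, purely illustrative
n_hat, var = chapman_estimate(120, 130, 28)
```

Open-population models like those used in the study generalize this idea by additionally estimating survival and entry between occasions, which two-occasion closed-population estimators cannot do.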
Digital Repository Service at National Institute of Oceanography (India)
Gauns, M.; Madhupratap, M.; Ramaiah, N.; Jyothibabu, R.; Fernandes, V.; Paul, J.T.; PrasannaKumar, S.
into the surface waters thereby reducing the primary production in the Bay of Bengal. The total living carbon content in the Bay of Bengal is much lower than in the Arabian Sea. Higher downward fluxes associated with deep mixed layer and high production...
In the mid-1990s the Tampa Bay Estuary Program proposed a nutrient reduction strategy focused on improving water clarity to promote seagrass expansion within Tampa Bay. A System Dynamics Model is being developed to evaluate spatially and temporally explicit impacts of nutrient r...
Xu, Zhen; Kim, Duk-jin; Kim, Seung Hee; Cho, Yang-Ki; Lee, Sun-Gu
2016-12-01
Morphologic and topographic changes of tidal flats can be key indicators for monitoring environmental changes and sea level rise. Recently, a number of studies have been performed to estimate temporal topographic changes in tidal flats based on the waterline method, using remote sensing data acquired at different tidal heights. However, the effect of seasonal variation has not been taken into consideration, nor been understood so far. In this study, 18 scenes of Landsat TM and ETM+ data, covering the period 2003-2004, and corresponding tidal gauge observation data, were used to estimate seasonal topographic variations in two major tidal flats in Gomso and Hampyeong Bay in the southern part of the west sea of South Korea, using the waterline method. Our results showed that summer deposition was dominant in Gomso Bay, with an overall average seasonal topographic increase of approximately 18.6 cm. In contrast, Hampyeong Bay showed dominant summer erosion, with an overall average seasonal topographic subsidence of about 5.0 cm. In addition, the net overall sedimentation budget was estimated as 6,308,047 m3 and -2,210,986 m3 in Gomso and Hampyeong Bay, respectively. The results also indicate that although both Gomso and Hampyeong Bays are classified as semi-enclosed tidal flats, the sedimentary facies caused by formation geometry and sediment type led to different topographic changes. The results demonstrate that the amount of seasonal topographic variation is not negligible, and they are expected to improve the accuracy of topographic change estimates derived by the waterline method.
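The waterline method assigns the tide height at acquisition time to the land-water boundary extracted from each scene. A toy sketch of one simple pixel-based variant, giving each pixel the highest tide at which it remained dry, is shown below; operational implementations instead interpolate elevations along the extracted waterline contours:

```python
import numpy as np

def waterline_dem(water_masks, tide_heights, shape):
    """Build a tidal-flat DEM from a stack of classified scenes.

    water_masks  : boolean arrays (True = water), one per scene
    tide_heights : tide gauge height (m) at each scene's acquisition time
    Scenes are processed in ascending tide order, so each dry pixel ends up
    holding the highest tide at which it was still exposed — a simple proxy
    for its elevation.  Pixels never exposed stay NaN.
    """
    dem = np.full(shape, np.nan)
    for mask, h in sorted(zip(water_masks, tide_heights), key=lambda p: p[1]):
        dem[~mask] = h
    return dem

# Toy 1x4 shore profile flooded progressively as the tide rises
masks = [np.array([[True, False, False, False]]),   # tide 0.5 m
         np.array([[True, True, False, False]]),    # tide 1.0 m
         np.array([[True, True, True, False]])]     # tide 1.5 m
dem = waterline_dem(masks, [0.5, 1.0, 1.5], (1, 4))
```

Differencing DEMs built from seasonal scene subsets gives the kind of seasonal deposition/erosion signal reported in the study.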
Directory of Open Access Journals (Sweden)
Gökhan Tamer Kayaalp
2016-02-01
Full Text Available The study was carried out to estimate temperature, light intensity, salinity, dissolved oxygen (DO), pH, and the biotic parameter chlorophyll-a in the water column in relation to depth, because physico-chemical parameters strongly affect both primary and secondary producers in marine life. For this purpose the physico-chemical properties were measured day and night down to 40 m depth over eight days. The means were compared using the analysis of variance method and Duncan's multiple comparison test, and the physico-chemical parameters were also modeled using regression and correlation analysis. The effects of temperature and salinity were found significant in the analysis of variance for the day, with similar results for the night. While the effect of depth on chlorophyll-a was significant at night, the effect of depth on DO was not significant during either day or night. The correlations between depth and the parameters were determined: negative correlations were found between depth and both temperature and light intensity. The determination coefficient of the model for salinity was also found to differ for daytime, and the correlations of depth with temperature, salinity and pH differed at night.
Charvat, Hadrien; Remontet, Laurent; Bossard, Nadine; Roche, Laurent; Dejardin, Olivier; Rachet, Bernard; Launoy, Guy; Belot, Aurélien
2016-08-15
The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer was the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics such as proximity to a large hospital that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. It becomes particularly important as many studies in cancer epidemiology aim at studying the effect on the excess mortality hazard of variables, such as deprivation indexes, often available only at the ecological level rather than at the individual level. We developed here an approach to fit a flexible excess hazard model including a random effect to describe the unobserved heterogeneity existing between different clusters of individuals, and with the possibility to estimate non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd.
Directory of Open Access Journals (Sweden)
Sarah K. Proceviat
2003-04-01
Full Text Available An arboreal lichen index was developed for assessing woodland caribou habitat throughout northeastern Ontario. The index comprised 5 classes differentiating arboreal lichen biomass on black spruce trees, ranging from maximal quantities of arboreal lichen (class 5) to minimal amounts (class 1). This arboreal lichen index was subsequently used to estimate the biomass of arboreal lichen available to woodland caribou on lowland black spruce sites ranging in age from 1 to 150 years post-harvest. A total of 39 sites were assessed and significant differences in arboreal lichen biomass were found, with a positive linear relationship between arboreal lichen biomass and forest age. It is proposed that the index be utilized by government and industry as a means of assessing the suitability of lowland black spruce habitat for woodland caribou in this region.
Energy Technology Data Exchange (ETDEWEB)
Sanfilippo, Antonio P.; Posse, Christian; Gopalan, Banu; Riensche, Roderick M.; Beagley, Nathaniel; Baddeley, Bob L.; Tratz, Stephen C.; Gregory, Michelle L.
2007-03-01
Gene and gene product similarity is a fundamental diagnostic measure in analyzing biological data and constructing predictive models for functional genomics. With the rising influence of the Gene Ontology, two complementary approaches have emerged where the similarity between two genes or gene products is obtained by comparing Gene Ontology (GO) annotations associated with the genes or gene products. One approach captures GO-based similarity in terms of hierarchical relations within each gene subontology. The other approach identifies GO-based similarity in terms of associative relations across the three gene subontologies. We propose a novel methodology where the two approaches can be merged with ensuing benefits in coverage and accuracy, and demonstrate that further improvements can be obtained by integrating textual evidence extracted from relevant biomedical literature.
Kim, Jang-Gyeong; Kwon, Hyun-Han; Kim, Dongkyun
2017-01-01
Poisson cluster stochastic rainfall generators (e.g., modified Bartlett-Lewis rectangular pulse, MBLRP) have been widely applied to generate synthetic sub-daily rainfall sequences. The MBLRP model reproduces the underlying distribution of the rainfall generating process. The existing optimization techniques are typically based on individual parameter estimates that treat each parameter as independent. However, parameter estimates sometimes compensate for the estimates of other parameters, which can cause high variability in the results if the covariance structure is not formally considered. Moreover, uncertainty associated with model parameters in the MBLRP rainfall generator is not usually addressed properly. Here, we develop a hierarchical Bayesian model (HBM)-based MBLRP model to jointly estimate parameters across weather stations and explicitly consider the covariance and uncertainty through a Bayesian framework. The model is tested using weather stations in South Korea. The HBM-based MBLRP model improves the identification of parameters with better reproduction of rainfall statistics at various temporal scales. Additionally, the spatial variability of the parameters across weather stations is substantially reduced compared to that of other methods.
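The pooling effect at the heart of the hierarchical approach can be illustrated with a minimal empirical-Bayes sketch: a normal-normal stand-in for the paper's full MCMC model, with invented station estimates and sampling variances. Shrinking each station's parameter toward a common mean is what reduces the spatial variability across stations.

```python
import numpy as np

# Per-station estimates of one MBLRP parameter and their sampling
# variances (all numbers invented for illustration).
theta_hat = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.52])
se2 = np.full_like(theta_hat, 0.002)

# Normal-normal model: theta_i ~ N(mu, tau2), theta_hat_i ~ N(theta_i, se2_i);
# mu and tau2 estimated by method of moments (empirical Bayes).
mu = theta_hat.mean()
tau2 = max(theta_hat.var(ddof=1) - se2.mean(), 1e-9)

# Posterior means shrink each station toward the common mean, which
# reduces the spatial variability of the parameter across stations.
w = tau2 / (tau2 + se2)
theta_pooled = w * theta_hat + (1 - w) * mu
```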
Directory of Open Access Journals (Sweden)
Pedro P Olea
Full Text Available BACKGROUND: Hierarchical partitioning (HP) is an analytical method of multiple regression that identifies the most likely causal factors while alleviating multicollinearity problems. Its use is increasing in ecology and conservation because it usefully complements multiple regression analysis. The public-domain package "hier.part" has been developed for running HP in R. Its authors highlight a "minor rounding error" for hierarchies constructed from >9 variables; however, the potential bias introduced by using this module has not yet been examined. Knowing this bias is pivotal because, for example, the ranking obtained in HP is used as a criterion for establishing conservation priorities. METHODOLOGY/PRINCIPAL FINDINGS: Using numerical simulations and two real examples, we assessed the robustness of this HP module with respect to the order of the variables in the analysis. Results indicated a considerable effect of variable order on the amount of independent variance explained by predictors in models with >9 explanatory variables. For these models the nominal ranking of importance of the predictors changed with variable order, i.e., predictors declared important by their contribution to explaining the response variable frequently became either more or less important under other variable orders. The probability of a variable changing position was best explained by the difference in independent explanatory power between that variable and the previous one in the nominal ranking of importance: the smaller this difference, the more likely the change of position. CONCLUSIONS/SIGNIFICANCE: HP should be applied with caution when more than 9 explanatory variables are used to rank covariate importance. The explained variance is not a useful parameter in models with more than 9 independent variables. The inconsistency in the results obtained by HP should be considered in future studies as well as in those
Schweig, E. S.; Muhs, D. R.; Simmons, K. R.; Halley, R. B.
2015-12-01
Guantanamo Bay, Cuba is an area dominated by a strike-slip tectonic regime and is therefore expected to have very low Quaternary uplift rates. We tested this hypothesis by study of an unusually well preserved emergent reef terrace around the bay. Up to 12 m of unaltered, growth-position reef corals are exposed at about 40 sections examined around ˜40 km of coastline. Maximum reef elevations in the protected, inner part of the bay are ˜11-12 m, whereas outer-coast shoreline angles of wave-cut benches are as high as ˜14 m. Fifty uranium-series analyses of unrecrystallized corals from six localities yield ages ranging from ˜134 ka to ˜115 ka, when adjusted for small biases due to slightly elevated initial 234U/238U values. Thus, ages of corals correlate this reef to the peak of the last interglacial period, marine isotope stage (MIS) 5.5. Previously, we dated the Key Largo Limestone to the same high-sea stand in the tectonically stable Florida Keys. Estimates of paleo-sea level during MIS 5.5 in the Florida Keys are ~6.6 to 8.3 m above present. Assuming a similar paleo-sea level in Cuba, this yields a long-term tectonic uplift rate of 0.04-0.06 m/ka over the past ~120 ka. This estimate supports the hypothesis that the tectonic uplift rate should be low in this strike-slip regime. Nevertheless, on the southeast coast of Cuba, east of our study area, we have observed flights of multiple marine terraces, suggesting either (1) a higher uplift rate or (2) an unusually well-preserved record of pre-MIS 5.5 terraces not observed at Guantanamo Bay.
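The uplift-rate arithmetic in the abstract can be made explicit: present shoreline elevation minus MIS 5.5 paleo-sea level, divided by terrace age. Inputs below are the ranges quoted in the abstract; the authors' exact inputs may differ slightly.

```python
# Long-term tectonic uplift rate from an emergent last-interglacial
# (MIS 5.5) shoreline: present elevation minus paleo-sea level, over age.
def uplift_rate_m_per_ka(present_elev_m, paleo_sea_level_m, age_ka):
    return (present_elev_m - paleo_sea_level_m) / age_ka

# Outer-coast shoreline angle ~14 m, MIS 5.5 paleo-sea level 6.6-8.3 m,
# terrace age ~120 ka (ranges quoted in the abstract)
rate_low = uplift_rate_m_per_ka(14.0, 8.3, 120.0)
rate_high = uplift_rate_m_per_ka(14.0, 6.6, 120.0)
```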
Li, Xin; Yu, Jiaguo; Jaroniec, Mietek
2016-05-01
As a green and sustainable technology, semiconductor-based heterogeneous photocatalysis has received much attention in the last few decades because it has potential to solve both energy and environmental problems. To achieve efficient photocatalysts, various hierarchical semiconductors have been designed and fabricated at the micro/nanometer scale in recent years. This review presents a critical appraisal of fabrication methods, growth mechanisms and applications of advanced hierarchical photocatalysts. Especially, the different synthesis strategies such as two-step templating, in situ template-sacrificial dissolution, self-templating method, in situ template-free assembly, chemically induced self-transformation and post-synthesis treatment are highlighted. Finally, some important applications including photocatalytic degradation of pollutants, photocatalytic H2 production and photocatalytic CO2 reduction are reviewed. A thorough assessment of the progress made in photocatalysis may open new opportunities in designing highly effective hierarchical photocatalysts for advanced applications ranging from thermal catalysis, separation and purification processes to solar cells.
Werner, Benjamin; Scott, Jacob G; Sottoriva, Andrea; Anderson, Alexander R A; Traulsen, Arne; Altrock, Philipp M
2016-04-01
Many tumors are hierarchically organized and driven by a subpopulation of tumor-initiating cells (TIC), or cancer stem cells. TICs are uniquely capable of recapitulating the tumor and are thought to be highly resistant to radio- and chemotherapy. Macroscopic patterns of tumor expansion before treatment and tumor regression during treatment are tied to the dynamics of TICs. Until now, the quantitative information about the fraction of TICs from macroscopic tumor burden trajectories could not be inferred. In this study, we generated a quantitative method based on a mathematical model that describes hierarchically organized tumor dynamics and patient-derived tumor burden information. The method identifies two characteristic equilibrium TIC regimes during expansion and regression. We show that tumor expansion and regression curves can be leveraged to infer estimates of the TIC fraction in individual patients at detection and after continued therapy. Furthermore, our method is parameter-free; it solely requires the knowledge of a patient's tumor burden over multiple time points to reveal microscopic properties of the malignancy. We demonstrate proof of concept in the case of chronic myeloid leukemia (CML), wherein our model recapitulated the clinical history of the disease in two independent patient cohorts. On the basis of patient-specific treatment responses in CML, we predict that after one year of targeted treatment, the fraction of TICs increases 100-fold and continues to increase up to 1,000-fold after 5 years of treatment. Our novel framework may significantly influence the implementation of personalized treatment strategies and has the potential for rapid translation into the clinic. Cancer Res; 76(7); 1705-13. ©2016 AACR.
Digital Repository Service at National Institute of Oceanography (India)
Majumdar, T.J.; Bhattacharyya, R.; Chatterjee, S.; Krishna, K.S.
height anomalies have been analyzed across the Ninetyeast and 85 degrees E Ridges within the Bay of Bengal. Present data sets are more accurate and detailed (off-track resolution: about 3.33 km and grid size: about 3.5 km). Observed geoid height - age...
Moran, Emily V; Clark, James S
2011-03-01
The scale of seed and pollen movement in plants has a critical influence on population dynamics and interspecific interactions, as well as on their capacity to respond to environmental change through migration or local adaptation. However, dispersal can be challenging to quantify. Here, we present a Bayesian model that integrates genetic and ecological data to simultaneously estimate effective seed and pollen dispersal parameters and the parentage of sampled seedlings. This model is the first developed for monoecious plants that accounts for genotyping error and treats dispersal from within and beyond a plot in a fully consistent manner. The flexible Bayesian framework allows the incorporation of a variety of ecological variables, including individual variation in seed production, as well as multiple sources of uncertainty. We illustrate the method using data from a mixed population of red oak (Quercus rubra, Q. velutina, Q. falcata) in the NC piedmont. For simulated test data sets, the model successfully recovered the simulated dispersal parameters and pedigrees. Pollen dispersal in the example population was extensive, with an average father-mother distance of 178 m. Estimated seed dispersal distances at the piedmont site were substantially longer than previous estimates based on seed-trap data (average 128 m vs. 9.3 m), suggesting that, under some circumstances, oaks may be less dispersal-limited than is commonly thought, with a greater potential for range shifts in response to climate change.
Institute of Scientific and Technical Information of China (English)
饶贤清
2012-01-01
This paper derives the Bayes estimation of the Pareto distribution under squared loss and constructs the empirical Bayes (EB) estimator by a kernel estimation method using identically distributed, negatively associated (NA) samples. The convergence rate of the EB estimator is established under suitable conditions. Finally, an empirical analysis of the wealth distribution of high-income earners in China shows that this distribution can be described by a Pareto distribution.
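A complete-sample sketch of the squared-loss Bayes estimator for the Pareto shape with a conjugate Gamma prior (hyperparameters below are illustrative; the paper's empirical-Bayes construction with NA samples is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate classical Pareto data with known scale x_m = 1:
# (1 + Generator.pareto(theta)) follows Pareto(theta) on [1, inf).
x_m, theta_true = 1.0, 2.5
x = x_m * (1.0 + rng.pareto(theta_true, size=500))

# The likelihood in theta is proportional to theta^n * exp(-theta * T),
# T = sum(log(x_i / x_m)), so a Gamma(a, b) prior is conjugate:
# posterior = Gamma(a + n, rate b + T).
n = x.size
T = np.sum(np.log(x / x_m))
a, b = 2.0, 1.0                       # illustrative prior hyperparameters

theta_bayes = (a + n) / (b + T)       # posterior mean = Bayes estimator (squared loss)
theta_mle = n / T                     # maximum-likelihood estimator, for comparison
```

With a large sample the posterior mean and the MLE nearly coincide; the prior matters mainly for small n.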
Hierarchical Bayes Models for Response Time Data
Craigmile, Peter F.; Peruggia, Mario; Van Zandt, Trisha
2010-01-01
Human response time (RT) data are widely used in experimental psychology to evaluate theories of mental processing. Typically, the data constitute the times taken by a subject to react to a succession of stimuli under varying experimental conditions. Because of the sequential nature of the experiments there are trends (due to learning, fatigue,…
Directory of Open Access Journals (Sweden)
Baljuk J.A.
2014-12-01
Full Text Available This work presents an algorithm for an adaptive strategy of optimal spatial sampling for studying the spatial organisation of soil animal communities under conditions of urbanization. The principal components obtained from the analysis of field data on soil penetration resistance, soil electrical conductivity and forest stand density, collected on a quasi-regular grid, were used as operating variables. The locations of the experimental polygons were determined by means of the program ESAP, and sampling was made on a regular grid within the experimental polygons. The biogeocoenological estimation of the experimental polygons was made on the basis of A. L. Belgard's ecomorphic analysis. The spatial configuration of biogeocoenosis types was established on the basis of remote sensing data and analysis of a digital elevation model. The suggested algorithm reveals the spatial organisation of soil animal communities at the levels of the investigated point, the biogeocoenosis, and the landscape.
Institute of Scientific and Technical Information of China (English)
刘婉贞
2013-01-01
The paper studies the Bayes estimation of the parameter of the generalized Pareto distribution under an inverse Gamma prior. Bayes estimators are obtained using the squared error loss function, and the MLE is compared with the Bayes estimator through Monte Carlo simulation.
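The paper's Monte Carlo comparison can be sketched with a conjugate stand-in: the ordinary Pareto shape with a Gamma prior rather than the paper's generalized Pareto with an inverse-Gamma prior. All settings below are illustrative; with the prior centred near the true value, the Bayes estimator's shrinkage lowers the mean squared error relative to the MLE.

```python
import numpy as np

rng = np.random.default_rng(7)

# Repeatedly simulate Pareto samples, estimate the shape by MLE and by
# the posterior mean under a conjugate Gamma(a, b) prior, and compare
# mean squared errors.
theta_true, x_m, n, n_rep = 3.0, 1.0, 20, 2000
a, b = 3.0, 1.0                       # prior centred near theta_true

se_mle, se_bayes = [], []
for _ in range(n_rep):
    x = x_m * (1.0 + rng.pareto(theta_true, size=n))
    T = np.sum(np.log(x / x_m))
    se_mle.append((n / T - theta_true) ** 2)
    se_bayes.append(((a + n) / (b + T) - theta_true) ** 2)

mse_mle = float(np.mean(se_mle))
mse_bayes = float(np.mean(se_bayes))
```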
Modeling hierarchical structures - Hierarchical Linear Modeling using MPlus
Jelonek, M
2006-01-01
The aim of this paper is to present the technique (and its linkage with physics) of overcoming problems connected with modeling social structures, which are typically hierarchical. Hierarchical Linear Models provide a conceptual and statistical mechanism for drawing conclusions regarding the influence of phenomena at different levels of analysis. In the social sciences they are used to analyze many problems, such as educational, organizational or market dilemmas. This paper introduces the logic of modeling hierarchical linear equations and estimation based on MPlus software. I present my own model to illustrate the impact of different factors on school acceptance level.
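A minimal sketch of the two-level random-intercept model that underlies HLM, with the intraclass correlation estimated by simple method-of-moments (ANOVA) rather than the ML machinery of a package such as MPlus; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-level random-intercept model: y_ij = gamma + u_j + e_ij with
# group effects u_j ~ N(0, tau2) and individual errors e_ij ~ N(0, sigma2).
n_groups, n_per = 50, 30
tau2, sigma2 = 4.0, 16.0                 # true variance components (ICC = 0.2)
u = rng.normal(0.0, np.sqrt(tau2), size=n_groups)
y = 10.0 + u[:, None] + rng.normal(0.0, np.sqrt(sigma2), size=(n_groups, n_per))

# ANOVA (method-of-moments) estimates of the variance components
group_means = y.mean(axis=1)
msb = n_per * group_means.var(ddof=1)                        # between-group mean square
msw = ((y - group_means[:, None]) ** 2).sum() / (n_groups * (n_per - 1))
tau2_hat = (msb - msw) / n_per
icc = tau2_hat / (tau2_hat + msw)        # share of variance at the group level
```

The ICC is the quantity that motivates hierarchical modeling: when it is non-negligible, single-level regression understates standard errors.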
Boyra, Guillermo
2013-08-16
A series of acoustic surveys (JUVENA) began in 2003 targeting juvenile anchovy (Engraulis encrasicolus) in the Bay of Biscay. A specific methodology was designed for mapping and estimating juvenile abundance annually, four months after the spawning season. After eight years of the survey, a consistent picture of the spatial pattern of the juvenile anchovy has emerged. Juveniles show a vertical and horizontal distribution pattern that depends on size. The younger individuals are found isolated from other species in waters closer to the surface, mainly off the shelf within the mid-southern region of the bay. The largest juveniles are usually found deeper and closer to the shore in the company of adult anchovy and other pelagic species. In these eight years, the survey has covered a wide range of juvenile abundances, and the estimates show a significant positive relationship between the juvenile biomasses and the one-year-old recruits of the following year. This demonstrates that the JUVENA index provides an early indication of the strength of next year's recruitment to the fishery and can therefore be used to improve the management advice for the fishery of this short-lived species. © 2013 International Council for the Exploration of the Sea.
Stewart, David R.; Long, James M.
2015-01-01
Species distribution models are useful tools to evaluate habitat relationships of fishes. We used hierarchical Bayesian multispecies mixture models to evaluate the relationships of both detection and abundance with habitat of reservoir fishes caught using tandem hoop nets. A total of 7,212 fish from 12 species were captured, and the majority of the catch was composed of Channel Catfish Ictalurus punctatus (46%), Bluegill Lepomis macrochirus (25%), and White Crappie Pomoxis annularis (14%). Detection estimates ranged from 8% to 69%, and modeling results suggested that fishes were primarily influenced by reservoir size and context, water clarity and temperature, and land-use types. Species were differentially abundant within and among habitat types, and some fishes were found to be more abundant in turbid, less impacted (e.g., by urbanization and agriculture) reservoirs with longer shoreline lengths; whereas, other species were found more often in clear, nutrient-rich impoundments that had generally shorter shoreline length and were surrounded by a higher percentage of agricultural land. Our results demonstrated that habitat and reservoir characteristics may differentially benefit species and assemblage structure. This study provides a useful framework for evaluating capture efficiency for not only hoop nets but other gear types used to sample fishes in reservoirs.
Seichter, Felicia; Vogt, Josef; Radermacher, Peter; Mizaikoff, Boris
2017-01-25
The calibration of analytical systems is time-consuming and the effort for daily calibration routines should therefore be minimized, while maintaining analytical accuracy and precision. The 'calibration transfer' approach proposes to combine calibration data already recorded with actual calibration measurements. However, this strategy was developed for the multivariate, linear analysis of spectroscopic data, and thus cannot be applied to sensors with a single response channel and/or a non-linear relationship between signal and the desired analyte concentration. To fill this gap for a non-linear calibration equation, we assume that the coefficients of the equation, collected over several calibration runs, are normally distributed. Considering that the coefficients of an actual calibration are a sample from this distribution, only a few standards are needed for a complete calibration data set. The resulting calibration transfer approach is demonstrated for a fluorescence oxygen sensor and implemented as a hierarchical Bayesian model, combined with a Lagrange multipliers technique and Markov chain Monte Carlo sampling. The latter provides realistic estimates of coefficients and predictions together with accurate error bounds by simulating known measurement errors and system fluctuations. Performance criteria for validation and for optimal selection of a reduced set of calibration samples were developed and led to a setup which maintains the analytical performance of a full calibration. Strategies for rapid detection of problems occurring in a daily calibration routine are proposed, thereby opening the possibility of correcting the problem just in time.
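The coefficient-transfer idea can be sketched for a linear signal model: past calibration runs define a normal prior on the coefficient vector, and a handful of fresh standards update it via the conjugate Gaussian posterior. The paper's sensor model is non-linear and is handled with MCMC instead; all coefficients, standards, and the noise level below are invented.

```python
import numpy as np

# Prior from past calibration runs: coefficient vectors [slope, offset]
# from four earlier calibrations (invented numbers).
past_coeffs = np.array([[0.98, 1.90],
                        [1.05, 2.10],
                        [1.01, 2.00],
                        [0.97, 2.05]])
m0 = past_coeffs.mean(axis=0)                    # prior mean
S0 = np.cov(past_coeffs.T) + 1e-6 * np.eye(2)    # prior covariance

# Two fresh standards measured today: signal = slope*conc + offset + noise
conc = np.array([1.0, 4.0])
sig = np.array([3.05, 6.10])
X = np.column_stack([conc, np.ones_like(conc)])
noise_var = 0.01 ** 2

# Conjugate Gaussian posterior for the coefficient vector
S0_inv = np.linalg.inv(S0)
Sn = np.linalg.inv(S0_inv + X.T @ X / noise_var)
mn = Sn @ (S0_inv @ m0 + X.T @ sig / noise_var)   # posterior mean coefficients
```

Because the prior already pins down plausible coefficients, two standards suffice where a full calibration would need many.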
Hsu, C.; Cifelli, R.; Zamora, R. J.; Schneider, T.
2014-12-01
The PRISM monthly climatology has been widely used by various agencies for diverse purposes. In the River Forecast Centers (RFCs), the PRISM monthly climatology is used to support tasks such as QPE, quality control of point precipitation observations, and fine-tuning QPFs. Validation studies by forecasters and researchers have shown that interpolation involving PRISM climatology can effectively reduce the estimation bias for locations where moderate or little orographic phenomena occur. However, many studies have pointed out limitations in the PRISM monthly climatology. These limitations are especially apparent in storm events with fast-moving wet air masses or with storm tracks that differ from climatology. In order to upgrade PRISM climatology so that it can characterize the climatology of storm events, it is critical to integrate large-scale atmospheric conditions with the original PRISM predictor variables and to simulate them at a temporal resolution higher than monthly. To this end, a simple, flexible, and powerful framework for precipitation estimation modeling that can be applied to very large data sets was developed. In this project, a decision-tree-based estimation structure was developed to perform the aforementioned variable integration work. Three Atmospheric River (AR) events were selected to explore the hierarchical relationships among these variables and how these relationships shape the event-based precipitation distribution pattern across California. Several atmospheric variables, including vertically Integrated Vapor Transport (IVT), temperature, zonal wind (u), meridional wind (v), and omega (ω), were added to enhance the sophistication of the tree-based structure in estimating precipitation. To develop a direction-based climatology, the directions of the ARs moving over the Pacific Ocean were also calculated and parameterized within the tree estimation structure. The results show that the involvement of the
Wang, Fan; Zhou, Bin; Xu, Jianming; Song, Lishong; Wang, Xin
2009-01-01
Suspended sediments concentration (SSC) in surface water derived from bottom sediment resuspension or discharge of sediment-laden rivers is an important indication of coastal water quality and changes rapidly in high-energy coastal area. Since artificial neural networks (ANN) had been proven successful in modeling a variety of geophysical transfer functions, an ANN model to simulate the relationship between surface water SSC and satellite-received radiances was employed. In situ SSC measurements from the Hangzhou Bay and the Moderate-resolution Imaging Spectroradiometer (MODIS) 250 m daily products were adopted in this study. Significant correlations were observed between in situ measurements and band 1-2 reflectance values of MODIS images, respectively. Results indicated that application of ANN model with one hidden layer appeared to yield superior simulation performance ( r 2 = 0.98; n = 25) compared with regression analysis method. The RMSE for the ANN model was less than 10%, whereas the RMSE for the regression analysis was more than 25%. Results also showed that different tidal situations affect the model simulation results to some extent. The SSC of surface water in Hangzhou Bay is high and changes rapidly due to tidal flood and ebb during a tidal cycle. The combined utilization of Terra and Aqua MODIS data can capture the tidal cycle induced dynamic of surface water SSC. This study demonstrated that MODIS 250 m daily products and ANN model are useful for monitoring surface SSC dynamic within high-energy coastal water environments.
Focazio, Michael J.; Plummer, L. Neil; Bohlke, John K.; Busenberg, Eurybiades; Bachman, L. Joseph; Powars, David S.
1998-01-01
Knowledge of the residence times of the ground-water systems in the Chesapeake Bay watershed helps resource managers anticipate potential delays between implementation of land-management practices and any improvements in river and estuary water quality. This report presents preliminary estimates of ground-water residence times and apparent ages of water in the shallow aquifers of the Chesapeake Bay watershed. A simple reservoir model, published data, and analyses of spring water were used to estimate residence times and apparent ages of ground-water discharge. Ranges of aquifer hydraulic characteristics throughout the Bay watershed were derived from published literature and were used to estimate ground-water residence times on the basis of a simple reservoir model. Simple combinations of rock type and physiographic province were used to delineate hydrogeomorphic regions (HGMRs) for the study area. The HGMRs are used to facilitate organization and display of the data and analyses. Illustrations depicting the relation of aquifer characteristics and associated residence times as a continuum for each HGMR were developed. In this way, the natural variation of aquifer characteristics can be seen graphically by use of data from selected representative studies. Water samples collected in September and November 1996 from 46 springs throughout the watershed were analyzed for chlorofluorocarbons (CFCs) to estimate the apparent age of ground water. For comparison purposes, apparent ages of water from springs were calculated assuming piston flow. Additional data are given to estimate apparent ages assuming an exponential distribution of ages in spring discharge. Additionally, results from previous studies of CFC-dating of ground water from other springs and wells in the watershed were compiled. The CFC data, and the data on major ions, nutrients, and nitrogen isotopes in the water collected from the 46 springs are included in this report. The apparent ages of water
Casas-Ruiz, M; Ligero, R A; Barbero, L
2012-06-01
In order to investigate the radiological hazard of naturally occurring radioactive material (NORM) and the man-made radionuclide (137)Cs in the Bay of Cádiz, 149 sediment samples were analysed. Activity concentration in all the samples was determined using an HPGe detection system. Activity concentration values of (226)Ra, (232)Th, (40)K and (137)Cs in the samples were 12.6±2.6 (2.5-40.6), 18.5±4.0 (2.8-73.4), 451±45 (105-1342) and 3.2±1.3 (0.2-16.0) Bq kg(-1), respectively. The outdoor external dose rate due to natural and man-made radionuclides was calculated to be 35.79±1.69 (4.71-119.16) nGy h(-1) and the annual effective dose was estimated to be 43.89±2.27 (5.78-146.14) µSv y(-1). Results showed low levels of radioactivity due to NORM and man-made (137)Cs in marine sediments recovered from the Bay of Cádiz (Spain), ruling out any significant radiological risk related to human activities in the area. Furthermore, the obtained data set could be used as background levels for future research.
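The reported dose figures can be reproduced from the mean activities with the standard UNSCEAR (2000) conversion coefficients; the abstract's value additionally includes a small (137)Cs contribution, so the match is approximate.

```python
# Mean activity concentrations from the abstract (Bq/kg)
c_ra, c_th, c_k = 12.6, 18.5, 451.0

# Absorbed dose rate in air 1 m above ground, UNSCEAR (2000)
# coefficients in nGy/h per Bq/kg for 226Ra, 232Th and 40K
dose_nGy_h = 0.462 * c_ra + 0.604 * c_th + 0.0417 * c_k

# Annual effective dose: 8760 h/y, 0.2 outdoor occupancy factor,
# 0.7 Sv/Gy dose conversion coefficient, expressed in microsieverts
annual_uSv = dose_nGy_h * 8760 * 0.2 * 0.7 / 1000.0
```

This yields about 35.8 nGy/h and 43.9 µSv/y, consistent with the 35.79 nGy/h and 43.89 µSv/y quoted above.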
Directory of Open Access Journals (Sweden)
2005-01-01
Full Text Available Empirical relationships to estimate the vertical attenuation coefficient of photosynthetically available radiation (KPAR) using Secchi disk, vertical black disk, and horizontal sighting ranges were developed for San Quintín Bay, Baja California. Radiometric PAR profiles were used to calculate KPAR. Vertical (ZD) and horizontal (HS) sighting ranges were measured with white (Secchi depth or ZSD, HSW) and black (ZBD, HSB) targets. The empirical power models KPAR = 1.48 ZSD^(-1.16), KPAR = 0.87 ZBD^(-1.52), KPAR = 0.54 HSW^(-0.65) and KPAR = 0.53 HSB^(-0.92) were developed for the corresponding relationships. The parameters of these models are not significantly different from those of models developed for Punta Banda Estuary, another Baja California lagoon, with the exception of the one for the KPAR-HSW relationship. Also, parameters of the KPAR-ZSD model for San Quintín Bay and Punta Banda Estuary are not significantly different from those developed for coastal waters near Santa Barbara, California. A set of general models is proposed that may apply to coastal water bodies of northwestern Baja California and southern California (KPAR = 1.45 ZSD^(-1.10), KPAR = 0.92 ZBD^(-1.45), and KPAR = 0.70 HSB^(-1.10)). While this approach may be universal, more data are needed to explore the variability of the parameters between different water bodies.
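Power models of this form, KPAR = c * Z**p, are linear in log-log space, so their parameters can be recovered by ordinary least squares. A sketch with synthetic data generated from the San Quintín ZSD model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic Secchi depths and KPAR values generated from the San
# Quintin model KPAR = 1.48 * ZSD**(-1.16) with small lognormal noise.
z_sd = rng.uniform(0.5, 8.0, size=60)
k_par = 1.48 * z_sd ** -1.16 * np.exp(rng.normal(0.0, 0.05, size=60))

# Fit log(KPAR) = log(c) + p * log(ZSD) by ordinary least squares.
p_hat, logc_hat = np.polyfit(np.log(z_sd), np.log(k_par), 1)
c_hat = float(np.exp(logc_hat))
```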
Asymptotic accuracy of Bayesian estimation for a single latent variable.
Yamazaki, Keisuke
2015-09-01
In data science and machine learning, hierarchical parametric models, such as mixture models, are often used. They contain two kinds of variables: observable variables, which represent the parts of the data that can be directly measured, and latent variables, which represent the underlying processes that generate the data. Although there has been an increase in research on the estimation accuracy for observable variables, the theoretical analysis of estimating latent variables has not been thoroughly investigated. In a previous study, we determined the accuracy of a Bayes estimation for the joint probability of the latent variables in a dataset, and we proved that the Bayes method is asymptotically more accurate than the maximum-likelihood method. However, the accuracy of the Bayes estimation for a single latent variable remains unknown. In the present paper, we derive the asymptotic expansions of the error functions, which are defined by the Kullback-Leibler divergence, for two types of single-variable estimations when the statistical regularity is satisfied. Our results indicate that the accuracies of the Bayes and maximum-likelihood methods are asymptotically equivalent and clarify that the Bayes method is only advantageous for multivariable estimations.
Bayes Estimator for the Exponential Distribution under Censorship
Institute of Scientific and Technical Information of China (English)
王立春
2006-01-01
Under type II censorship and random censorship, respectively, we show in this paper that the Bayes estimator of the exponential scale parameter under a conjugate prior is a shrinkage estimator of the form θ̂_BE = a θ̂ + b Eθ, where θ̂ is an unbiased estimator depending on the samples and Eθ denotes the expectation of the prior distribution. When the squared loss function is adopted, a + b = 1; if the weighted squared loss function is used, then a + b < 1.
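As a quick numerical sketch of the shrinkage form above (assuming, for illustration, a scale parameterization with an inverse-gamma(α, β) conjugate prior and uncensored data; the specific numbers are hypothetical), the posterior mean under squared loss is exactly a weighted average with a + b = 1:

```python
def bayes_scale_estimate(xs, alpha=3.0, beta=4.0):
    """Posterior mean of the exponential scale theta under an
    inverse-gamma(alpha, beta) conjugate prior (squared-error loss)."""
    n, s = len(xs), sum(xs)
    return (beta + s) / (alpha + n - 1)

xs = [1.2, 0.7, 2.5, 1.1, 0.9]
n = len(xs)
theta_hat = sum(xs) / n            # unbiased sample estimate of theta
prior_mean = 4.0 / (3.0 - 1.0)     # E[theta] = beta / (alpha - 1)
a = n / (n + 3.0 - 1.0)            # weight on the data estimate
b = (3.0 - 1.0) / (n + 3.0 - 1.0)  # weight on the prior mean
shrunk = a * theta_hat + b * prior_mean
```

Here `shrunk` coincides with the posterior mean, making the a + b = 1 structure of the squared-loss case explicit.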
Institute of Scientific and Technical Information of China (English)
王晓红; 宋立新
2013-01-01
We study the Bayes estimation and admissibility of the Pareto distribution parameter θ under type-I (fixed-time) censored data. Taking the loss function to be the entropy loss, we compute the entropy loss under fixed-time censoring and give the general form of the Bayes estimator of θ. Under a Gamma prior distribution, we calculate the posterior density of θ and thus obtain the exact form of its Bayes estimator, and we prove that this Bayes estimator is admissible.
Albaina, A.
2015-02-01
In order to investigate the role of predation on eggs and larvae in the recruitment of anchovy (Engraulis encrasicolus), sardines (Sardina pilchardus), sprats (Sprattus sprattus), and 52 macrozooplankton taxa were assayed for anchovy remains in the gut during the 2010 spawning season using a molecular method. This real-time PCR based assay was capable of reliably detecting 0.005 ng of anchovy DNA (roughly 1/100 of a single egg) and could detect predation events up to 6 h after ingestion by small zooplankton taxa. A total of 1069 macrozooplankton individuals, 237 sardines and 213 sprats were tested. Both fish species and 32 macrozooplankton taxa showed remains of anchovy DNA within their stomach contents. The two main findings are (1) that the previously neglected macrozooplankton impact on anchovy egg/larva mortality is of the same order of magnitude as that due to planktivorous fishes, and (2) that the predation pressure differed notably between the two main spawning centers of Bay of Biscay anchovy. While relatively low mortality rates were recorded at the shelf-break spawning center, higher predation pressure from both fish and macrozooplankton was exerted at the shelf one.
Parameter Identification by Bayes Decision and Neural Networks
DEFF Research Database (Denmark)
Kulczycki, P.; Schiøler, Henrik
1994-01-01
The problem of parameter identification by Bayes point estimation using neural networks is investigated.
Directory of Open Access Journals (Sweden)
Aldo Pacheco Ferreira
2010-12-01
Samples of liver and kidney of the little blue heron (Egretta caerulea) collected in Sepetiba Bay, Rio de Janeiro, Brazil, were analysed for their copper, zinc, cadmium, lead, chromium, and nickel content. Mean concentration levels in liver and kidney (μg.g-1 dry weight) were 6.32955 and 6.57136 (Cd); 78.17409 and 96.89409 (Zn); 44.01727 and 65.20864 (Cu); 41.15091 and 39.62318 (Pb); 2.80091 and 4.16455 (Cr); and 9.27182 and 9.91091 (Ni), respectively. The results indicate relatively high trace-metal contamination in E. caerulea, suggesting potentially widespread adverse biological and mutagenic effects across trophic levels and, therefore, signalling a risk to human health.
Watanabe, Sho; Furuichi, Toru; Ishii, Kazuei
This study proposed a method for estimating the collectable amount of food waste that considers both the food waste generators' cooperation ratio and the amount of food waste generated, and clarified the factors influencing the collectable amount. In our method, the cooperation ratio was calculated using a binary logit model of the kind often used for multiple-choice problems in traffic research. To develop a more precise binary logit model, the factors influencing the cooperation ratio were first extracted through a questionnaire survey of food waste generators' intentions, and a preference investigation was then conducted as a second step. As a result, the collectable amount of food waste in the Ishikari Bay New Port area under the current collection system was estimated to be 72 t/day using our method. The most critical factors influencing the collectable amount were the treatment fee for households and the permitted degree of mixing of improper materials for the retail trade and restaurant businesses.
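The core calculation (cooperation probability from a binary logit, multiplied by the generated amount) can be caricatured as follows; the logit coefficients and the fee sensitivity are hypothetical, not the study's fitted values.

```python
import math

def cooperation_ratio(fee, coefs=(1.5, -0.08)):
    """Binary logit: probability that a generator cooperates with separate
    collection, as a function of the treatment fee (hypothetical coefficients)."""
    b0, b1 = coefs
    u = b0 + b1 * fee            # utility of cooperating
    return 1.0 / (1.0 + math.exp(-u))

def collectable(generation_t_per_day, fee):
    """Collectable amount = generated amount * cooperation ratio."""
    return generation_t_per_day * cooperation_ratio(fee)

low_fee = collectable(100.0, fee=5.0)    # cheap collection -> more cooperation
high_fee = collectable(100.0, fee=50.0)  # expensive collection -> less cooperation
```

This reproduces the qualitative finding that the treatment fee drives the collectable amount: raising the fee lowers the cooperation ratio and hence the tonnage collected.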
Zhu, L; Carlin, B P
Bayes and empirical Bayes methods have proven effective in smoothing crude maps of disease risk, eliminating the instability of estimates in low-population areas while maintaining overall geographic trends and patterns. Recent work extends these methods to the analysis of areal data which are spatially misaligned, that is, involving variables (typically counts or rates) which are aggregated over differing sets of regional boundaries. The addition of a temporal aspect complicates matters further, since now the misalignment can arise either within a given time point, or across time points (as when the regional boundaries themselves evolve over time). Hierarchical Bayesian methods (implemented via modern Markov chain Monte Carlo computing methods) enable the fitting of such models, but a formal comparison of their fit is hampered by their large size and often improper prior specifications. In this paper, we accomplish this comparison using the deviance information criterion (DIC), a recently proposed generalization of the Akaike information criterion (AIC) designed for complex hierarchical model settings like ours. We investigate the use of the delta method for obtaining an approximate variance estimate for DIC, in order to attach significance to apparent differences between models. We illustrate our approach using a spatially misaligned data set relating a measure of traffic density to paediatric asthma hospitalizations in San Diego County, California.
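The DIC computation described above can be sketched for a toy normal-mean model; the "posterior draws" below are simulated stand-ins for MCMC output, and all numbers are illustrative.

```python
import math
import random

random.seed(1)
data = [random.gauss(2.0, 1.0) for _ in range(50)]

def deviance(mu, sigma=1.0):
    """Deviance D(mu) = -2 * log-likelihood of the data under Normal(mu, sigma)."""
    ll = sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
             - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)
    return -2.0 * ll

# stand-in for MCMC draws from the posterior of mu (not a real sampler)
mu_samples = [random.gauss(2.0, 0.15) for _ in range(2000)]

d_bar = sum(deviance(m) for m in mu_samples) / len(mu_samples)  # posterior mean deviance
mu_bar = sum(mu_samples) / len(mu_samples)                      # posterior mean of mu
p_d = d_bar - deviance(mu_bar)   # effective number of parameters
dic = d_bar + p_d                # equivalently D(mu_bar) + 2 * p_d
```

The delta-method variance estimate discussed in the paper would then be used to judge whether DIC differences between competing models are larger than their Monte Carlo noise.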
Perry, Russell W.; Kirsch, Joseph E.; Hendrix, A. Noble
2016-06-17
Resource managers rely on abundance or density metrics derived from beach seine surveys to make vital decisions that affect fish population dynamics and assemblage structure. However, abundance and density metrics may be biased by imperfect capture and lack of geographic closure during sampling. Currently, there is considerable uncertainty about the capture efficiency of juvenile Chinook salmon (Oncorhynchus tshawytscha) by beach seines. Heterogeneity in capture can occur through unrealistic assumptions of closure and from variation in the probability of capture caused by environmental conditions. We evaluated the assumptions of closure and the influence of environmental conditions on capture efficiency and abundance estimates of Chinook salmon from beach seining within the Sacramento–San Joaquin Delta and the San Francisco Bay. Beach seine capture efficiency was measured using a stratified random sampling design combined with open and closed replicate depletion sampling. A total of 56 samples were collected during the spring of 2014. To assess variability in capture probability and the absolute abundance of juvenile Chinook salmon, beach seine capture efficiency data were fitted to the paired depletion design using modified N-mixture models. These models allowed us to explicitly test the closure assumption and estimate environmental effects on the probability of capture. We determined that our updated method allowing for lack of closure between depletion samples drastically outperformed traditional data analysis that assumes closure among replicate samples. The best-fit model (lowest-valued Akaike Information Criterion model) included the probability of fish being available for capture (relaxed closure assumption), capture probability modeled as a function of water velocity and percent coverage of fine sediment, and abundance modeled as a function of sample area, temperature, and water velocity. Given that beach seining is a ubiquitous sampling technique for
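As context for the paired depletion design, the classical closed-population two-pass removal estimator (Seber-Le Cren form) can be sketched as follows; the paper's N-mixture models generalize this by relaxing the closure assumption and modeling capture probability as a function of covariates, and the catch numbers here are hypothetical.

```python
def two_pass_removal(c1, c2):
    """Closed-population two-pass depletion estimator.
    c1, c2: catches on the first and second seine hauls over the same area."""
    if c1 <= c2:
        raise ValueError("estimator requires c1 > c2 (declining catches)")
    p_hat = (c1 - c2) / c1        # per-pass capture probability
    n_hat = c1 ** 2 / (c1 - c2)   # abundance in the enclosed area
    return n_hat, p_hat

n_hat, p_hat = two_pass_removal(60, 24)
```

When closure fails (fish enter or leave between hauls), this estimator is biased, which is exactly the situation the open-model N-mixture formulation is designed to detect and correct.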
Tate, C. G.; Moersch, J.; Jun, I.; Ming, D. W.; Mitrofanov, I.; Litvak, M.; Behar, A.; Boynton, W. V.; Deflores, L.; Drake, D.; Ehresmann, B.; Fedosov, F.; Golovin, D.; Hardgrove, C.; Harshman, K.; Hassler, D. M.; Kozyrev, A. S.; Kuzmin, R.; Lisov, D.; Malakhov, A.; Milliken, R.; Mischna, M.; Mokrousov, M.; Nikiforov, S.; Sanin, A. B.; Starr, R.; Varenikov, A.; Vostrukhin, A.; Zeitlin, C.
2015-12-01
The Dynamic Albedo of Neutrons (DAN) experiment on the Mars Science Laboratory (MSL) rover Curiosity is designed to detect neutrons to determine hydrogen abundance within the subsurface of Mars (Mitrofanov, I.G. et al. [2012]. Space Sci. Rev. 170, 559-582. http://dx.doi.org/10.1007/s11214-012-9924-y; Litvak, M.L. et al. [2008]. Astrobiology 8, 605-613. http://dx.doi.org/10.1089/ast.2007.0157). While DAN has a pulsed neutron generator for active measurements, in passive mode it only measures the leakage spectrum of neutrons produced by the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) and Galactic Cosmic Rays (GCR). DAN passive measurements provide better spatial coverage than the active measurements because they can be acquired while the rover is moving. Here we compare DAN passive-mode data to models of the instrument's response to compositional differences in a homogeneous regolith in order to estimate the water equivalent hydrogen (WEH) content along the first 200 sols of Curiosity's traverse in Gale Crater, Mars. WEH content is shown to vary greatly along the traverse. These estimates range from 0.5 ± 0.1 wt.% to 3.9 ± 0.2 wt.% for fixed locations (usually overnight stops) investigated by the rover and 0.6 ± 0.2 wt.% to 7.6 ± 1.3 wt.% for areas that the rover has traversed while continuously acquiring DAN passive data between fixed locations. Estimates of WEH abundances at fixed locations based on passive mode data are in broad agreement with those estimated at the same locations using active mode data. Localized (meter-scale) anomalies in estimated WEH values from traverse measurements have no particular surface expression observable in co-located images. However at a much larger scale, the hummocky plains and bedded fractured units are shown to be distinct compositional units based on the hydrogen content derived from DAN passive measurements. DAN passive WEH estimates are also shown to be consistent with geologic models inferred from other
Uncertainty in perception and the Hierarchical Gaussian Filter
Directory of Open Access Journals (Sweden)
Christoph Daniel Mathys
2014-11-01
In its full sense, perception rests on an agent's model of how its sensory input comes about and on the inferences it draws based on this model. These inferences are necessarily uncertain. Here, we illustrate how the hierarchical Gaussian filter (HGF) offers a principled and generic way to deal with the several forms that uncertainty in perception takes. The HGF is a recent derivation of one-step update equations from Bayesian principles that rests on a hierarchical generative model of the environment and its (in)stability. It is computationally highly efficient, allows for online estimates of hidden states, and has found numerous applications to experimental data from human subjects. In this paper, we generalize previous descriptions of the HGF and its account of perceptual uncertainty. First, we explicitly formulate the extension of the HGF's hierarchy to any number of levels; second, we discuss how various forms of uncertainty are accommodated by the minimization of variational free energy as encoded in the update equations; third, we combine the HGF with decision models and demonstrate the inversion of this combination; finally, we report a simulation study that compared four optimization methods for inverting the HGF/decision-model combination at different noise levels. These four methods (Nelder-Mead simplex algorithm, Gaussian-process-based global optimization, variational Bayes, and Markov chain Monte Carlo sampling) all performed well even under considerable noise, with variational Bayes offering the best combination of efficiency and informativeness of inference. Our results demonstrate that the HGF provides a principled, flexible, efficient, and at the same time intuitive framework for the resolution of perceptual uncertainty in behaving agents.
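The precision-weighting idea at the heart of the HGF update equations can be illustrated with a one-level Gaussian special case, essentially a Kalman-filter-like update with hypothetical precisions; the full HGF adds further levels that track the (in)stability of the environment.

```python
def precision_weighted_update(mu_prior, pi_prior, u, pi_u):
    """One precision-weighted belief update: the prediction error is weighted
    by the relative precision of the sensory input versus the prior belief."""
    delta = u - mu_prior                      # prediction error
    pi_post = pi_prior + pi_u                 # precisions add
    mu_post = mu_prior + (pi_u / pi_post) * delta
    return mu_post, pi_post

# track a hidden quantity from a short stream of noisy inputs (made-up values)
mu, pi = 0.0, 1.0
for u in [1.0, 1.2, 0.8, 1.1]:
    mu, pi = precision_weighted_update(mu, pi, u, pi_u=4.0)
```

Each update moves the belief toward the input in proportion to how reliable the input is, which is the generic mechanism the HGF extends to a whole hierarchy of uncertainties.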
Study on headland-bay sandy coast stability in South China coasts
Yu, Ji-Tao; Chen, Zi-Shen
2011-03-01
The equilibrium planform of headland-bay beaches has been a central problem in the study of long-term sandy-beach evolution and stabilization, extensively applied to forecast long-term coastal erosion, the influence of coastal engineering, and long-term coastal management and protection; however, it has received little attention in China. The parabolic relationship is the most widely used empirical relationship for determining the static equilibrium shape of headland-bay beaches. This paper uses that relation to predict and classify 31 headland-bay beaches and concludes that these bays cannot achieve the ultimate static equilibrium planform in South China. The empirical bay equation can morphologically estimate the state of beach stabilization, but it is only a referential predictive tool and cannot readily evaluate headland-bay shoreline movements over years and decades. Using the Digital Shoreline Analysis System developed by the USGS, the rates of shoreline recession and accretion of these different headland-bay beaches were quantitatively calculated from 1990 to 2000. The conclusions of this paper include that (a) most of the 31 bays remained relatively stable, although rates of erosion and accretion were relatively large where man-made constructions affected estuaries within the bays between 1990 and 2000; (b) two bays, Haimen Bay and Hailingshan Bay, originally in the quasi-static equilibrium planform determined by the parabolic bay-shape equation, have become unstable under the influence of coastal engineering; and (c) the 31 bays show different recession and accretion characteristics across bays and segments. On the one hand, some bays exhibit accretion overall, while others show erosion on the whole. Shanwei Bay, Houmen Bay, Pinghai Bay, and Yazhou Bay have similar planforms, characterized by less accretion on the sheltered segment and greater accretion on the transitional and tangential segments. On the other hand, different segments of some
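The parabolic bay-shape relation referred to above has the form R/Rβ = C0 + C1(β/θ) + C2(β/θ)², where R is the radius from the wave-diffraction point to the shoreline at angle θ and β is the wave-obliquity angle at the downcoast control point. A sketch with placeholder coefficients follows; the published coefficients are tabulated functions of β, so the values below are purely illustrative, chosen only so that C0 + C1 + C2 = 1 and hence R(θ = β) = Rβ.

```python
def parabolic_bay_radius(r_beta, beta_deg, theta_deg, c=(0.1, 1.1, -0.2)):
    """Parabolic bay-shape relation R/R_beta = C0 + C1*(beta/theta) + C2*(beta/theta)^2.
    Coefficients c are illustrative placeholders (real ones depend on beta)."""
    c0, c1, c2 = c
    ratio = beta_deg / theta_deg
    return r_beta * (c0 + c1 * ratio + c2 * ratio ** 2)

r_control = parabolic_bay_radius(100.0, 40.0, 40.0)    # at theta = beta, R = R_beta
r_downdrift = parabolic_bay_radius(100.0, 40.0, 120.0) # shoreline curves landward
```

Comparing a surveyed shoreline against the curve generated this way is how a bay is classified as in static equilibrium, quasi-static equilibrium, or unstable.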
DEFF Research Database (Denmark)
Thomadsen, Tommy
2005-01-01
The thesis investigates models for hierarchical network design and the methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme for different types of hierarchical networks. This is supplemented by a review of ring network design problems and a presentation of a model capable of describing most hierarchical networks. We use methods based on linear programming to design the hierarchical networks, so a brief introduction to the various linear-programming-based methods is included. The thesis is thus suitable as a foundation for the study of hierarchical network design. The major contribution of the thesis consists of seven papers included in the appendix, which address hierarchical network design and/or ring network design.
Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose
2017-01-01
Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed-effects model fits is currently recommended for covariate identification, whereas individual empirical Bayes estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error of the LRT and EBE approaches, to confirm the similarity in power between the LRT and EBE approaches reported previously, and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate affecting clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect but also controlled the false-positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause of decreased power or an inflated false-positive rate, although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to current recommendations, EBEs may be a better choice for statistical tests in PPK covariate analysis than the LRT. We propose a three-step covariate modeling approach for population PK analysis that exploits the advantages of EBEs while overcoming their shortcomings, which not only markedly reduces the run time for population PK analysis but also provides more accurate covariate tests.
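The EBE-based covariate test can be caricatured as follows: simulated individual log-clearance deviations, a fixed hypothetical shrinkage factor standing in for the mixed-effects fit, and an ordinary least-squares slope of the EBEs on the covariate. All numbers are illustrative; real EBEs come from a nonlinear mixed-effects model.

```python
import random

random.seed(0)
n = 200
weight = [random.gauss(70.0, 10.0) for _ in range(n)]
# "true" individual log-clearance deviations with a covariate effect
# (the slope of 0.01 per kg is a hypothetical value)
eta = [0.01 * (w - 70.0) + random.gauss(0.0, 0.2) for w in weight]

shrinkage = 0.3                               # fixed stand-in for eta-shrinkage
ebe = [(1.0 - shrinkage) * e for e in eta]    # EBEs pull toward the population mean (0)

# least-squares slope of the EBEs on centered weight
wc = [w - 70.0 for w in weight]
slope = sum(a * b for a, b in zip(wc, ebe)) / sum(a * a for a in wc)
```

Because the EBEs are shrunk, the detected slope is attenuated relative to the true 0.01, illustrating the abstract's point that shrinkage biases the effect size downward without necessarily destroying the test's ability to detect that the effect exists.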
Hierarchical Multiagent Reinforcement Learning
2004-01-25
In this paper, we investigate the use of hierarchical reinforcement learning (HRL) to speed up the acquisition of cooperative multiagent tasks. We introduce a hierarchical multiagent reinforcement learning (RL) framework and propose a hierarchical multiagent RL algorithm called Cooperative HRL.
Hierarchical modelling for the environmental sciences statistical methods and applications
Clark, James S
2006-01-01
New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.
DEFF Research Database (Denmark)
Thomadsen, Tommy
2005-01-01
Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well-suited for coping ... the clusters. The design of hierarchical networks involves clustering of nodes, hub selection, and network design, i.e. selection of links and routing of flows. Hierarchical networks have been in use for decades, but integrated design of these networks has only been considered for very special types of networks. The thesis investigates models for hierarchical network design and the methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme...
Bouriga, Mathilde; Féron, Olivier; Marin, Jean-Michel; Robert, Christian
2010-01-01
This paper concerns the estimation of covariance matrices in the case where the number of data points available for estimation is small relative to the dimension of the problem, so that classical estimation methods based on maximum likelihood are not robust. We propose an unsupervised estimation method based on a hierarchical Bayesian model of the covariance-matrix estimation problem: an inverse-Wishart prior distribution is placed...
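Under an inverse-Wishart prior, the posterior mean of a covariance matrix has a closed form that shrinks the sample scatter toward the prior scale matrix, which is the basic mechanism such hierarchical models exploit when data are scarce. A minimal sketch with hypothetical numbers (zero-mean data assumed for simplicity):

```python
def iw_posterior_mean(scatter, n, psi, nu, p):
    """Posterior mean of a p x p covariance matrix under an inverse-Wishart(nu, psi)
    prior, given n zero-mean observations with scatter matrix S = sum x x^T:
    E[Sigma | data] = (psi + S) / (nu + n - p - 1)."""
    return [[(psi[i][j] + scatter[i][j]) / (nu + n - p - 1)
             for j in range(p)] for i in range(p)]

# tiny 2-D example: prior centered on the identity (nu = 5 => E[Sigma] = psi / (nu - p - 1))
psi = [[2.0, 0.0], [0.0, 2.0]]
scatter = [[40.0, 18.0], [18.0, 30.0]]   # from n = 10 hypothetical observations
post = iw_posterior_mean(scatter, n=10, psi=psi, nu=5.0, p=2)
```

The off-diagonal entry of the posterior mean is pulled toward the prior's zero correlation, illustrating the regularizing effect when n is small relative to the dimension.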
Kashuba, Roxolana; Cha, YoonKyung; Alameddine, Ibrahim; Lee, Boknam; Cuffney, Thomas F.
2010-01-01
Multilevel hierarchical modeling methodology has been developed for use in ecological data analysis. The effect of urbanization on stream macroinvertebrate communities was measured across a gradient of basins in each of nine metropolitan regions across the conterminous United States. The hierarchical nature of this dataset was harnessed in a multi-tiered model structure, predicting both invertebrate response at the basin scale and differences in invertebrate response at the region scale. Ordination site scores, total taxa richness, Ephemeroptera, Plecoptera, Trichoptera (EPT) taxa richness, and richness-weighted mean tolerance of organisms at a site were used to describe invertebrate responses. Percentage of urban land cover was used as a basin-level predictor variable. Regional mean precipitation, air temperature, and antecedent agriculture were used as region-level predictor variables. Multilevel hierarchical models were fit to both levels of data simultaneously, borrowing statistical strength from the complete dataset to reduce uncertainty in regional coefficient estimates. Additionally, whereas non-hierarchical regressions were only able to show differing relations between invertebrate responses and urban intensity separately for each region, the multilevel hierarchical regressions were able to explain and quantify those differences within a single model. In this way, this modeling approach directly establishes the importance of antecedent agricultural conditions in masking the response of invertebrates to urbanization in metropolitan regions such as Milwaukee-Green Bay, Wisconsin; Denver, Colorado; and Dallas-Fort Worth, Texas. Also, these models show that regions with high precipitation, such as Atlanta, Georgia; Birmingham, Alabama; and Portland, Oregon, start out with better regional background conditions of invertebrates prior to urbanization but experience faster negative rates of change with urbanization. Ultimately, this urbanization
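The "borrowing of statistical strength" described above can be sketched as precision-weighted partial pooling of per-region slope estimates toward the grand mean; the regional estimates, standard errors, and group-level standard deviation tau below are hypothetical.

```python
def partial_pool(region_estimates, region_ses, tau):
    """Shrink per-region slope estimates toward the grand mean with precision
    weights, the core operation of a multilevel hierarchical regression."""
    grand_mean = sum(region_estimates) / len(region_estimates)
    pooled = []
    for b, se in zip(region_estimates, region_ses):
        w = (1.0 / se ** 2) / (1.0 / se ** 2 + 1.0 / tau ** 2)  # data vs group weight
        pooled.append(w * b + (1.0 - w) * grand_mean)
    return pooled

slopes = [-0.8, -0.1, -1.5]   # hypothetical urbanization effects in three regions
ses = [0.3, 0.6, 0.3]         # noisier regions get pulled harder toward the mean
pooled = partial_pool(slopes, ses, tau=0.5)
```

This is why the multilevel fit yields less uncertain regional coefficients than separate per-region regressions: imprecise regional estimates are stabilized by the rest of the dataset.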
Marrelec, Guillaume; Messé, Arnaud; Bellec, Pierre
2015-01-01
The use of mutual information as a similarity measure in agglomerative hierarchical clustering (AHC) raises an important issue: some correction needs to be applied for the dimensionality of variables. In this work, we formulate the decision of merging dependent multivariate normal variables in an AHC procedure as a Bayesian model comparison. We found that the Bayesian formulation naturally shrinks the empirical covariance matrix towards a matrix set a priori (e.g., the identity), provides an automated stopping rule, and corrects for dimensionality using a term that scales up the measure as a function of the dimensionality of the variables. Also, the resulting log Bayes factor is asymptotically proportional to the plug-in estimate of mutual information, with an additive correction for dimensionality in agreement with the Bayesian information criterion. We investigated the behavior of these Bayesian alternatives (in exact and asymptotic forms) to mutual information on simulated and real data. An encouraging result was first derived on simulations: the hierarchical clustering based on the log Bayes factor outperformed off-the-shelf clustering techniques as well as raw and normalized mutual information in terms of classification accuracy. On a toy example, we found that the Bayesian approaches led to results that were similar to those of mutual information clustering techniques, with the advantage of an automated thresholding. On real functional magnetic resonance imaging (fMRI) datasets measuring brain activity, it identified clusters consistent with the established outcome of standard procedures. On this application, normalized mutual information had a highly atypical behavior, in the sense that it systematically favored very large clusters. These initial experiments suggest that the proposed Bayesian alternatives to mutual information are a useful new tool for hierarchical clustering.
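The asymptotic link between the log Bayes factor and the plug-in estimate of mutual information can be illustrated in the simplest bivariate-normal case, with a BIC-style penalty for the single extra cross-covariance parameter. This is an illustrative form of the dimensionality correction, not the paper's exact criterion.

```python
import math

def gaussian_mi(rho):
    """Plug-in mutual information (nats) between two unit-variance
    Gaussian variables with correlation rho: -0.5 * log(1 - rho^2)."""
    return -0.5 * math.log(1.0 - rho ** 2)

def merge_score(rho, n):
    """BIC-style merging score: n * MI minus a penalty of (d/2) * log(n)
    for the d = 1 extra cross-covariance parameter (illustrative)."""
    return n * gaussian_mi(rho) - 0.5 * math.log(n)

keep_merged = merge_score(0.6, n=200) > 0    # strong dependence: merge
keep_split = merge_score(0.05, n=200) > 0    # near-independence: keep apart
```

The penalty term is what keeps weakly dependent variables from being merged as the dimensionality grows, which raw mutual information alone fails to do.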
Hierarchical Reverberation Mapping
Brewer, Brendon J
2013-01-01
Reverberation mapping (RM) is an important technique in studies of active galactic nuclei (AGN). The key idea of RM is to measure the time lag $\\tau$ between variations in the continuum emission from the accretion disc and subsequent response of the broad line region (BLR). The measurement of $\\tau$ is typically used to estimate the physical size of the BLR and is combined with other measurements to estimate the black hole mass $M_{\\rm BH}$. A major difficulty with RM campaigns is the large amount of data needed to measure $\\tau$. Recently, Fine et al (2012) introduced a new approach to RM where the BLR light curve is sparsely sampled, but this is counteracted by observing a large sample of AGN, rather than a single system. The results are combined to infer properties of the sample of AGN. In this letter we implement this method using a hierarchical Bayesian model and contrast this with the results from the previous stacked cross-correlation technique. We find that our inferences are more precise and allow fo...
Schwarz, L.K.; Runge, M.C.
2009-01-01
Age estimation of individuals is often an integral part of species-management research, and a number of age-estimation techniques are commonly employed. Often, the error in these techniques is not quantified or accounted for in other analyses, particularly in growth-curve models used to describe physiological responses to environment and human impacts. Noninvasive, quick, and inexpensive methods to estimate age are also needed. This research provides two Bayesian methods to (i) incorporate age uncertainty into an age-length Schnute growth model and (ii) derive from the growth model a method to estimate age from length. The methods are then applied to Florida manatee (Trichechus manatus) carcasses. After quantifying the uncertainty in the aging technique (counts of ear-bone growth layers), we fit age-length data to the Schnute growth model separately by sex and season. Independent prior information about population age structure and the results of the Schnute model are then combined to estimate age from length. The results describing the age-length relationship agree with our understanding of manatee biology. The new methods allow us to estimate age, with quantified uncertainty, for 98% of collected carcasses: 36% from ear bones and 62% from length.
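The Schnute growth model referred to above can be sketched in its general case (both rate parameters nonzero); the parameter values below are hypothetical illustrations, not the fitted manatee values.

```python
import math

def schnute_length(age, a, b, l1, l2, tau1, tau2):
    """Schnute (1981) growth model, general case a != 0, b != 0:
    L(t) = [L1^b + (L2^b - L1^b) * (1 - exp(-a*(t - tau1)))
                                 / (1 - exp(-a*(tau2 - tau1)))]^(1/b).
    L1 and L2 are the lengths at the reference ages tau1 and tau2."""
    frac = (1 - math.exp(-a * (age - tau1))) / (1 - math.exp(-a * (tau2 - tau1)))
    return (l1 ** b + (l2 ** b - l1 ** b) * frac) ** (1.0 / b)

# hypothetical parameter values for illustration only
lengths = [schnute_length(t, a=0.3, b=1.2, l1=150.0, l2=300.0, tau1=1.0, tau2=20.0)
           for t in (1, 5, 10, 20)]
```

Because the curve is monotone in age over this range, it can be inverted numerically to read age off from length, which is the direction the paper's second method formalizes with priors on population age structure.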
Handley, Lawrence R.; Spear, Kathryn A.; Eleonor Taylor,; Thatcher, Cindy
2011-01-01
The Galveston Bay estuary is located on the upper Texas Gulf coast (Lester and Gonzalez, 2002). It is composed of four major sub-bays: Galveston, Trinity, East, and West Bays. It is Texas' largest estuary on the Gulf Coast, with a total area of 155,399 hectares (384,000 acres) and 1,885 km (1,171 miles) of shoreline (Burgan and Engle, 2006). The volume of the bay has increased over the past 50 years due to subsidence, dredging, and sea-level rise. Outside of ship channels, the maximum depth is only 3.7 m (12 ft), with the average depth ranging from 1.2 m (4 ft) to 2.4 m (8 ft), and even shallower in areas with widespread oyster reefs (Lester and Gonzalez, 2002). The tidal range is less than 0.9 m (3 ft), but water levels and circulation are highly influenced by wind. The estuary was formed in a drowned river delta, and its bayous were once channels of the Brazos and Trinity Rivers. Today, the watersheds surrounding the Trinity and San Jacinto Rivers, along with many other smaller bayous, feed into the bay. The entire Galveston Bay watershed covers 85,470 km2 (33,000 miles2) (Figure 1). Galveston Island, a 5,000-year-old sand bar that lies at the western edge of the bay's opening into the Gulf of Mexico, impedes the freshwater flow of the Trinity and San Jacinto Rivers into the Gulf, the majority of which comes from the Trinity. The Bolivar Peninsula lies at the eastern edge of the bay's opening into the Gulf. Water flows into the Gulf at Bolivar Roads (Galveston Pass), between Galveston Island and Bolivar Peninsula, and at San Luis Pass, between the western side of Galveston Island and Follets Island.
Institute of Scientific and Technical Information of China (English)
姜岩松; 刘雨; 苏宝库
2011-01-01
In order to eliminate the multicollinearity caused by the g2 model of dual orthogonal accelerometers and to increase the accuracy of coefficient identification in high-precision accelerometer calibration tests in a gravity field, this paper proposes an empirical Bayes ridge estimation method and applies it to the parameter identification of the dual-orthogonal-accelerometer error model. Simulation and experiment show that, compared with conventional least-squares estimation, empirical Bayes ridge estimation can not only eliminate the influence of the multicollinearity and separate the quadratic coefficient K2, but also achieves higher identification accuracy.
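Ridge estimation's effect on a nearly collinear design can be sketched with hand-rolled 2-parameter normal equations; the design matrix products and the fixed ridge parameter below are hypothetical, whereas an empirical Bayes scheme would estimate the ridge parameter from the data itself.

```python
def ridge_2x2(xtx, xty, lam):
    """Ridge solution (X'X + lam*I)^{-1} X'y for a two-parameter model,
    via an explicit 2x2 matrix inverse."""
    a, b = xtx[0][0] + lam, xtx[0][1]
    c, d = xtx[1][0], xtx[1][1] + lam
    det = a * d - b * c
    return [(d * xty[0] - b * xty[1]) / det,
            (-c * xty[0] + a * xty[1]) / det]

# nearly collinear normal equations (hypothetical g^2-style design)
xtx = [[10.0, 9.9], [9.9, 10.0]]
xty = [20.0, 19.9]
ols = ridge_2x2(xtx, xty, lam=0.0)   # least squares: unstable, inflated coefficients
ridge = ridge_2x2(xtx, xty, lam=1.0) # ridge: shrunken, stable coefficients
```

Adding lam to the diagonal inflates the near-zero determinant of X'X, which is exactly how ridge estimation tames the variance explosion that multicollinearity causes in least squares.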
Hierarchical modularity in human brain functional networks
Meunier, D; Fornito, A; Ersche, K D; Bullmore, E T; 10.3389/neuro.11.037.2009
2010-01-01
The idea that complex systems have a hierarchical modular organization originates in the early 1960s and has recently attracted fresh support from quantitative studies of large scale, real-life networks. Here we investigate the hierarchical modular (or "modules-within-modules") decomposition of human brain functional networks, measured using functional magnetic resonance imaging (fMRI) in 18 healthy volunteers under no-task or resting conditions. We used a customized template to extract networks with more than 1800 regional nodes, and we applied a fast algorithm to identify nested modular structure at several hierarchical levels. We used mutual information, 0 < I < 1, to estimate the similarity of community structure of networks in different subjects, and to identify the individual network that is most representative of the group. Results show that human brain functional networks have a hierarchical modular organization with a fair degree of similarity between subjects, I=0.63. The largest 5 modules at ...
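The similarity measure used above, mutual information between community partitions normalized to 0 < I < 1, can be sketched for two toy label assignments (a generic NMI computation with geometric-mean normalization, which may differ from the normalization the authors used):

```python
import math
from collections import Counter

def normalized_mutual_info(a, b):
    """Normalized mutual information between two partitions given as label lists."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    # Mutual information of the joint label distribution.
    mi = sum((c / n) * math.log((c / n) / ((pa[x] / n) * (pb[y] / n)))
             for (x, y), c in pab.items())
    # Entropies of each partition (assumes each has at least two clusters).
    ha = -sum((c / n) * math.log(c / n) for c in pa.values())
    hb = -sum((c / n) * math.log(c / n) for c in pb.values())
    return mi / math.sqrt(ha * hb)  # geometric-mean normalization

# Identical partitions give I = 1; a partially differing one gives 0 < I < 1.
p1 = [0, 0, 0, 1, 1, 1, 2, 2, 2]
p2 = [0, 0, 1, 1, 1, 1, 2, 2, 2]
print(round(normalized_mutual_info(p1, p1), 3))  # 1.0
print(round(normalized_mutual_info(p1, p2), 3))
```

An off-the-shelf equivalent is `sklearn.metrics.normalized_mutual_info_score`, which also offers other normalizations.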
Hierarchically-coupled hidden Markov models for learning kinetic rates from single-molecule data
van de Meent, Jan-Willem; Wood, Frank; Gonzalez, Ruben L; Wiggins, Chris H
2013-01-01
We address the problem of analyzing sets of noisy time-varying signals that all report on the same process but confound straightforward analyses due to complex inter-signal heterogeneities and measurement artifacts. In particular we consider single-molecule experiments which indirectly measure the distinct steps in a biomolecular process via observations of noisy time-dependent signals such as a fluorescence intensity or bead position. Straightforward hidden Markov model (HMM) analyses attempt to characterize such processes in terms of a set of conformational states, the transitions that can occur between these states, and the associated rates at which those transitions occur; but require ad-hoc post-processing steps to combine multiple signals. Here we develop a hierarchically coupled HMM that allows experimentalists to deal with inter-signal variability in a principled and automatic way. Our approach is a generalized expectation maximization hyperparameter point estimation procedure with variational Bayes a...
Institute of Scientific and Technical Information of China (English)
张玲霞; 师义民
2000-01-01
Consider the two-sided truncation distribution families of the form f(x|θ) = ω(θ1, θ2) h(x) I[θ1,θ2](x), where θ = (θ1, θ2) and T(x) = (t1(x), t2(x)) = (min(x1, …, xm), max(x1, …, xm)) is a sufficient statistic whose marginal density is denoted by f(t). In this paper, by estimating f(t), we construct the empirical Bayes estimator (EBE) of the parameter function Q(θ) and prove that this EBE is asymptotically optimal.
Estimation of reliability based on zero-failure data
Institute of Scientific and Technical Information of China (English)
韩明
2002-01-01
When the prior density of the reliability R has the form π(R|α) ∝ R^α with 0 < α < 2, the hierarchical Bayes estimate of product reliability is derived for the binomial distribution with zero-failure data.
Otsubo, Koichi; Yamaoka, Kazue; Yokoyama, Tetsuji; Takahashi, Kunihiko; Nishikawa, Masako; Tango, Toshiro
2009-02-01
The standardized mortality ratio (SMR) is frequently used to compare health status among different populations. However, the SMR can be biased when based upon communities with small population size, such as towns and wards, and comparison of SMRs in such cases is not appropriate. The "empirical Bayes estimate of standardized mortality ratio" (EBSMR) is a useful alternative index for comparing mortalities among small populations. The objective of the present study was to use the EBSMR to clarify the relationships between health care resources and mortalities in 3,360 municipalities in Japan. Health care resource data (number of physicians, number of general clinics, number of general sickbeds, and number of emergency hospitals) and socioeconomic factors (population, birth rate, aged households, marital rate, divorce rate, taxable income per individual subject to tax, unemployment, secondary and tertiary industrial employment, and prefecture) were obtained from officially published reports. EBSMRs for all causes, cerebrovascular disease, heart disease, acute myocardial infarction, and malignant neoplasms were calculated from the 1997-2001 vital statistics records. Multiple regression analysis was used to examine the relationships between EBSMRs and the variables representing health care resources, with socioeconomic factors as covariates. Some of the variables were log-transformed to normalize their distributions. The correlation between number of physicians and general sickbeds was very high (Pearson's r = 0.776), so we excluded the number of general sickbeds. Some of the EBSMRs were inversely associated with the number of physicians per person (all causes in males (beta = -0.042, P = 0.024) and females (beta = -0.150, P < 0.001), cerebrovascular disease in females (beta = -0.074, P < 0.001), heart disease in males (beta = -0.066, P < 0.001) and females (beta = -0.087, P < 0.001), acute myocardial infarction in females (beta = -0.061, P = 0.003), and malignant
Energy Technology Data Exchange (ETDEWEB)
Hanson, K.M.; Cunningham, G.S.
1996-04-01
The authors are developing a computer application, called the Bayes Inference Engine, to provide the means to make inferences about models of physical reality within a Bayesian framework. The construction of complex nonlinear models is achieved by a fully object-oriented design. The models are represented by a data-flow diagram that may be manipulated by the analyst through a graphical programming environment. Maximum a posteriori solutions are achieved using a general, gradient-based optimization algorithm. The application incorporates a new technique of estimating and visualizing the uncertainties in specific aspects of the model.
Multicollinearity in hierarchical linear models.
Yu, Han; Jiang, Shanhe; Land, Kenneth C
2015-09-01
This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model.
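One standard diagnostic for the presence of multicollinearity discussed above is the variance inflation factor; the paper's own top-down procedure is more elaborate, but a minimal VIF check can be sketched as follows (the design matrix is invented):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X: 1 / (1 - R_j^2),
    where R_j^2 comes from regressing column j on the remaining columns."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        # Remaining predictors plus an intercept column.
        Z = np.column_stack([np.delete(X, j, axis=1), np.ones(len(X))])
        beta = np.linalg.lstsq(Z, y, rcond=None)[0]
        resid = y - Z @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)               # independent of x1
x3 = x1 + 0.05 * rng.normal(size=200)   # nearly collinear with x1
X = np.column_stack([x1, x2, x3])
print(np.round(vif(X), 1))  # x1 and x3 show inflated VIFs; x2 stays near 1
```

A common rule of thumb flags VIF values above about 5-10 as problematic.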
Micromechanics of hierarchical materials
DEFF Research Database (Denmark)
Mishnaevsky, Leon, Jr.
2012-01-01
A short overview of micromechanical models of hierarchical materials (hybrid composites, biomaterials, fractal materials, etc.) is given. Several examples of the modeling of strength and damage in hierarchical materials are summarized, among them, 3D FE model of hybrid composites...... with nanoengineered matrix, fiber bundle model of UD composites with hierarchically clustered fibers and 3D multilevel model of wood considered as a gradient, cellular material with layered composite cell walls. The main areas of research in micromechanics of hierarchical materials are identified, among them......, the investigations of the effects of load redistribution between reinforcing elements at different scale levels, of the possibilities to control different material properties and to ensure synergy of strengthening effects at different scale levels and using the nanoreinforcement effects. The main future directions...
Hierarchical auxetic mechanical metamaterials.
Gatt, Ruben; Mizzi, Luke; Azzopardi, Joseph I; Azzopardi, Keith M; Attard, Daphne; Casha, Aaron; Briffa, Joseph; Grima, Joseph N
2015-02-11
Auxetic mechanical metamaterials are engineered systems that exhibit the unusual macroscopic property of a negative Poisson's ratio due to sub-unit structure rather than chemical composition. Although their unique behaviour makes them superior to conventional materials in many practical applications, they are limited in availability. Here, we propose a new class of hierarchical auxetics based on the rotating rigid units mechanism. These systems retain the enhanced properties from having a negative Poisson's ratio with the added benefits of being a hierarchical system. Using simulations on typical hierarchical multi-level rotating squares, we show that, through design, one can control the extent of auxeticity, degree of aperture and size of the different pores in the system. This makes the system more versatile than similar non-hierarchical ones, making them promising candidates for industrial and biomedical applications, such as stents and skin grafts.
Introduction into Hierarchical Matrices
Litvinenko, Alexander
2013-12-05
Hierarchical matrices allow us to reduce computational storage and cost from cubic to almost linear. This technique can be applied for solving PDEs, integral equations, matrix equations and approximation of large covariance and precision matrices.
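The storage reduction described above comes from replacing admissible off-diagonal blocks with low-rank factors. A minimal sketch, assuming a smooth kernel evaluated between two well-separated point clusters (the kernel, cluster geometry, and tolerance are invented for illustration):

```python
import numpy as np

# A smooth kernel evaluated between two well-separated clusters gives a
# numerically low-rank block: the situation hierarchical matrices exploit.
xs = np.linspace(0.0, 1.0, 200)
ys = np.linspace(10.0, 11.0, 200)                 # well separated from xs
block = 1.0 / np.abs(xs[:, None] - ys[None, :])   # 200 x 200 dense block

# Truncated SVD: keep singular values above a relative tolerance.
U, s, Vt = np.linalg.svd(block)
k = int(np.sum(s > 1e-10 * s[0]))                 # numerical rank
approx = U[:, :k] * s[:k] @ Vt[:k]

dense_storage = block.size                        # 40000 entries
lowrank_storage = k * (block.shape[0] + block.shape[1])
rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
print("rank:", k, "storage:", dense_storage, "->", lowrank_storage)
print("relative error:", rel_err)
```

In a real H-matrix the partitioning into admissible blocks is done recursively and the factors are computed without forming the dense block, but the compression principle is the same.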
Applied Bayesian Hierarchical Methods
Congdon, Peter D
2010-01-01
Bayesian methods facilitate the analysis of complex models and data structures. Emphasizing data applications, alternative modeling specifications, and computer implementation, this book provides a practical overview of methods for Bayesian analysis of hierarchical models.
Programming with Hierarchical Maps
DEFF Research Database (Denmark)
Ørbæk, Peter
This report describes the hierarchical maps used as a central data structure in the Corundum framework. We describe its most prominent features, argue for its usefulness and briefly describe some of the software prototypes implemented using the technology.
Catalysis with hierarchical zeolites
DEFF Research Database (Denmark)
Holm, Martin Spangsberg; Taarning, Esben; Egeblad, Kresten
2011-01-01
Hierarchical (or mesoporous) zeolites have attracted significant attention during the first decade of the 21st century, and so far this interest continues to increase. There have already been several reviews giving detailed accounts of the developments emphasizing different aspects of this research...... topic. Until now, the main reason for developing hierarchical zeolites has been to achieve heterogeneous catalysts with improved performance but this particular facet has not yet been reviewed in detail. Thus, the present paper summarizes and categorizes the catalytic studies utilizing hierarchical...... zeolites that have been reported hitherto. Prototypical examples from some of the different categories of catalytic reactions that have been studied using hierarchical zeolite catalysts are highlighted. This clearly illustrates the different ways that improved performance can be achieved with this family...
Higher-Order Item Response Models for Hierarchical Latent Traits
Huang, Hung-Yu; Wang, Wen-Chung; Chen, Po-Hsi; Su, Chi-Ming
2013-01-01
Many latent traits in the human sciences have a hierarchical structure. This study aimed to develop a new class of higher order item response theory models for hierarchical latent traits that are flexible in accommodating both dichotomous and polytomous items, to estimate both item and person parameters jointly, to allow users to specify…
Asymptotic Optimality of Empirical Bayes Estimation Under Random Censorship
Institute of Scientific and Technical Information of China (English)
王立春
2006-01-01
Using the empirical Bayes (EB) method, under the condition that both the historical samples and the current sample are randomly right-censored by another variable with unknown distribution, this paper constructs an empirical Bayes estimator of the parameter of the exponential distribution and establishes its asymptotic optimality. An example and simulation results are given at the end of the paper.
Hierarchical linear regression models for conditional quantiles
Institute of Scientific and Technical Information of China (English)
TIAN Maozai; CHEN Gemai
2006-01-01
Quantile regression has several useful features and is therefore gradually developing into a comprehensive approach to the statistical analysis of linear and nonlinear response models, but it cannot deal effectively with data that have a hierarchical structure. In practice, such data hierarchies are neither accidental nor ignorable; they are a common phenomenon. Ignoring this hierarchical data structure risks overlooking the importance of group effects and may render invalid many of the traditional statistical analysis techniques used for studying data relationships. Hierarchical models, on the other hand, take the hierarchical data structure into account and have many applications in statistics, ranging from overdispersion to the construction of min-max estimators. However, hierarchical models are essentially mean regression; they therefore cannot characterize the entire conditional distribution of a dependent variable given high-dimensional covariates, and the estimated coefficient vector (marginal effects) is sensitive to outlier observations on the dependent variable. In this article, a new approach based on Gauss-Seidel iteration, which takes full advantage of both quantile regression and hierarchical models, is developed. On the theoretical front, we also consider the asymptotic properties of the new method, obtaining simple conditions for n^{1/2}-convergence and asymptotic normality. We also illustrate the use of the technique with real hierarchical educational data and show how the results can be explained.
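The quantile-regression building block used above can be sketched by minimizing the check (pinball) loss. This is a plain single-level linear fit by subgradient descent, not the paper's hierarchical Gauss-Seidel procedure, and the data are simulated:

```python
import numpy as np

def fit_quantile(X, y, tau, lr=0.01, steps=20000):
    """Linear conditional-quantile fit by subgradient descent on the
    check (pinball) loss rho_tau(u) = u * (tau - 1{u < 0})."""
    X1 = np.column_stack([np.ones(len(X)), X])  # intercept + predictors
    beta = np.zeros(X1.shape[1])
    for _ in range(steps):
        u = y - X1 @ beta
        grad = -X1.T @ (tau - (u < 0)) / len(y)  # subgradient of mean loss
        beta -= lr * grad
    return beta

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 500)
y = 2.0 * x + rng.normal(0, 0.5, 500)
b10 = fit_quantile(x[:, None], y, 0.1)
b90 = fit_quantile(x[:, None], y, 0.9)
# The tau = 0.9 line sits above the tau = 0.1 line, tracing the spread
# of the conditional distribution rather than only its mean.
print("tau=0.1 intercept/slope:", np.round(b10, 2))
print("tau=0.9 intercept/slope:", np.round(b90, 2))
```

Fitting several quantile levels at once is what lets quantile methods describe the whole conditional distribution, the property the abstract contrasts with mean regression.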
Advanced hierarchical distance sampling
Royle, Andy
2016-01-01
In this chapter, we cover a number of important extensions of the basic hierarchical distance-sampling (HDS) framework from Chapter 8. First, we discuss the inclusion of “individual covariates,” such as group size, in the HDS model. This is important in many surveys where animals form natural groups that are the primary observation unit, with the size of the group expected to have some influence on detectability. We also discuss HDS integrated with time-removal and double-observer or capture-recapture sampling. These “combined protocols” can be formulated as HDS models with individual covariates, and thus they have a commonality with HDS models involving group structure (group size being just another individual covariate). We cover several varieties of open-population HDS models that accommodate population dynamics. On one end of the spectrum, we cover models that allow replicate distance sampling surveys within a year, which estimate abundance relative to availability and temporary emigration through time. We consider a robust design version of that model. We then consider models with explicit dynamics based on the Dail and Madsen (2011) model and the work of Sollmann et al. (2015). The final major theme of this chapter is relatively newly developed spatial distance sampling models that accommodate explicit models describing the spatial distribution of individuals known as Point Process models. We provide novel formulations of spatial DS and HDS models in this chapter, including implementations of those models in the unmarked package using a hack of the pcount function for N-mixture models.
Directory of Open Access Journals (Sweden)
Dongsheng Chen
2016-01-01
Accurate biomass estimations are important for assessing and monitoring forest carbon storage. Bayesian theory has been widely applied to tree biomass models, and recently a hierarchical Bayesian approach has received increasing attention for improving them. In this study, tree biomass data were obtained by sampling 310 trees from 209 permanent sample plots in larch plantations across six regions of China. Non-hierarchical and hierarchical Bayesian approaches were used to model allometric biomass equations. We found that the total, root, stem wood, stem bark, branch and foliage biomass model relationships were statistically significant (p-values < 0.001) for both the non-hierarchical and hierarchical Bayesian approaches, but the hierarchical Bayesian approach improved the goodness-of-fit statistics over the non-hierarchical approach. The R2 values of the hierarchical approach were higher than those of the non-hierarchical approach by 0.008, 0.018, 0.020, 0.003, 0.088 and 0.116 for the total tree, root, stem wood, stem bark, branch and foliage models, respectively. The hierarchical Bayesian approach significantly improved the accuracy of the biomass models (except for stem bark) and can reflect regional differences by using random parameters to improve regional-scale model accuracy.
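The allometric form underlying the biomass equations above, B = a·D^b, is conventionally fit in log-log space. A minimal non-Bayesian sketch (the diameter-biomass pairs are invented, not the study's 310-tree dataset):

```python
import numpy as np

# Allometric biomass equation B = a * D^b, linearized as
# log B = log a + b * log D and fit by least squares.
D = np.array([8.0, 12.0, 15.0, 20.0, 26.0, 33.0])      # diameter (cm), invented
B = np.array([14.0, 42.0, 80.0, 180.0, 390.0, 760.0])  # biomass (kg), invented

slope, intercept = np.polyfit(np.log(D), np.log(B), 1)
a, b = np.exp(intercept), slope
pred = a * D ** b
r2 = 1 - np.sum((B - pred) ** 2) / np.sum((B - B.mean()) ** 2)
print(f"B = {a:.3f} * D^{b:.2f}, R^2 = {r2:.3f}")
```

The hierarchical Bayesian version in the study goes further by letting a and b vary by region as random parameters, which is where the reported R2 gains come from.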
Directory of Open Access Journals (Sweden)
Nguyen Thi Thu Ha
2013-12-01
Sea eutrophication is a natural process of water enrichment caused by increased nutrient loading that severely affects coastal ecosystems by decreasing water quality. The degree of eutrophication can be assessed by chlorophyll-a concentration. This study aims to develop a remote sensing method suitable for estimating chlorophyll-a concentrations in tropical coastal waters with abundant phytoplankton using Moderate Resolution Imaging Spectroradiometer (MODIS/Terra) imagery and to improve the spatial resolution of the MODIS/Terra-based estimation from 1 km to 100 m by geostatistics. A model based on the ratio of green and blue band reflectance (rGBr) is proposed in view of the bio-optical properties of chlorophyll-a. Tien Yen Bay in northern Vietnam, a typical phytoplankton-rich coastal area, was selected as the case study site. The superiority of rGBr over two existing representative models, based on the blue-green band ratio and the red-near-infrared band ratio, was demonstrated by a high correlation of the estimated chlorophyll-a concentrations at 40 sites with values measured in situ. Ordinary kriging was then shown to be highly capable of predicting concentrations for regions of the image covered by clouds and thus lacking sea-surface data. The resulting space-time maps of concentrations over a year clarified that Tien Yen Bay is characterized by naturally eutrophic waters, since average chlorophyll-a concentrations exceeded 10 mg/m3 in the summer. The temporal changes in chlorophyll-a concentrations were consistent with average monthly air temperatures and precipitation. Consequently, a combination of rGBr and ordinary kriging can effectively monitor water quality in tropical shallow waters.
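The ordinary-kriging step used above to fill cloud-covered pixels can be sketched in one dimension. This is a generic kriging solver with an assumed Gaussian covariance model; the coordinates, "chlorophyll" values, and variogram parameters are invented:

```python
import numpy as np

def ordinary_krige(x_obs, z_obs, x_new, range_=2.0, sill=1.0):
    """Ordinary kriging with an assumed Gaussian covariance model.
    Solves the kriging system with a Lagrange multiplier that forces
    the weights to sum to one (the unbiasedness constraint)."""
    def cov(a, b):
        d = np.abs(a[:, None] - b[None, :])
        return sill * np.exp(-(d / range_) ** 2)

    n = len(x_obs)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(x_obs, x_obs)
    A[n, n] = 0.0
    preds = []
    for x0 in np.atleast_1d(x_new):
        b = np.ones(n + 1)
        b[:n] = cov(x_obs, np.array([x0]))[:, 0]
        w = np.linalg.solve(A, b)[:n]   # kriging weights
        preds.append(w @ z_obs)
    return np.array(preds)

# Toy "chlorophyll" field sampled at a few sites; predict at an unsampled gap
# (standing in for a cloud-covered pixel).
x_obs = np.array([0.0, 1.0, 2.0, 4.0, 5.0])
z_obs = np.array([3.0, 4.0, 6.0, 6.5, 5.0])
print(ordinary_krige(x_obs, z_obs, [3.0]))
```

Without a nugget term, kriging reproduces the observations exactly at sampled locations, which the test below checks.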
Parallel hierarchical radiosity rendering
Energy Technology Data Exchange (ETDEWEB)
Carter, M.
1993-07-01
In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
Neutrosophic Hierarchical Clustering Algorithms
Directory of Open Access Journals (Sweden)
Rıdvan Şahin
2014-03-01
The interval neutrosophic set (INS) is a generalization of the interval-valued intuitionistic fuzzy set (IVIFS), whose membership and non-membership values of elements consist of fuzzy ranges, while the single-valued neutrosophic set (SVNS) is regarded as an extension of the intuitionistic fuzzy set (IFS). In this paper, we extend the hierarchical clustering techniques proposed for IFSs and IVIFSs to SVNSs and INSs, respectively. Based on the traditional hierarchical clustering procedure, the single-valued neutrosophic aggregation operator, and the basic distance measures between SVNSs, we define a single-valued neutrosophic hierarchical clustering algorithm for clustering SVNSs. We then extend the algorithm to classify interval neutrosophic data. Finally, we present some numerical examples to show the effectiveness and availability of the developed clustering algorithms.
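For contrast with the neutrosophic extensions above, the traditional agglomerative procedure the authors build on can be sketched for ordinary numeric points (single linkage; the points and cluster count are invented, and the neutrosophic aggregation operators and distance measures are omitted):

```python
import math

def single_linkage(points, k):
    """Classical agglomerative (bottom-up) clustering with single linkage:
    repeatedly merge the two closest clusters until k clusters remain."""
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):
        # Single linkage: distance between clusters is the closest pair.
        return min(math.dist(points[i], points[j]) for i in a for j in b)

    while len(clusters) > k:
        pairs = [(dist(clusters[i], clusters[j]), i, j)
                 for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        _, i, j = min(pairs)
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return [sorted(c) for c in clusters]

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (5, 5)]
print(single_linkage(pts, 3))  # three groups: the two tight clusters and the lone point
```

For production use, `scipy.cluster.hierarchy.linkage` implements this (and other linkage rules) far more efficiently; the neutrosophic variants replace the Euclidean distance with set-theoretic distance measures.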
A hierarchical Bayes error correction model to explain dynamic effects
D. Fok (Dennis); C. Horváth (Csilla); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)
2004-01-01
For promotional planning and market segmentation it is important to understand the short-run and long-run effects of the marketing mix on category and brand sales. In this paper we put forward a sales response model to explain the differences in short-run and long-run effects of promotio
Simultaneous estimation of noise variance and number of peaks in Bayesian spectral deconvolution
Tokuda, Satoru; Okada, Masato
2016-01-01
Heuristic identification of peaks from noisy complex spectra often leads to misunderstanding physical and chemical properties of matter. In this paper, we propose a framework based on Bayesian inference, which enables us to separate multi-peak spectra into single peaks statistically and is constructed in two steps. The first step is estimating both noise variance and number of peaks as hyperparameters based on Bayes free energy, which generally is not analytically tractable. The second step is fitting the parameters of each peak function to the given spectrum by calculating the posterior density, which has a problem of local minima and saddles since multi-peak models are nonlinear and hierarchical. Our framework enables escaping from local minima or saddles by using the exchange Monte Carlo method and calculates Bayes free energy. We discuss a simulation demonstrating how efficient our framework is and show that estimating both noise variance and number of peaks prevents overfitting, overpenalizing, and misun...
Bayes Factors via Savage-Dickey Supermodels
Mootoovaloo, A; Kunz, M
2016-01-01
We outline a new method to compute the Bayes Factor for model selection which bypasses the Bayesian Evidence. Our method combines multiple models into a single, nested, Supermodel using one or more hyperparameters. Since the models are now nested the Bayes Factors between the models can be efficiently computed using the Savage-Dickey Density Ratio (SDDR). In this way model selection becomes a problem of parameter estimation. We consider two ways of constructing the supermodel in detail: one based on combined models, and a second based on combined likelihoods. We report on these two approaches for a Gaussian linear model for which the Bayesian evidence can be calculated analytically and a toy nonlinear problem. Unlike the combined model approach, where a standard Monte Carlo Markov Chain (MCMC) struggles, the combined-likelihood approach fares much better in providing a reliable estimate of the log-Bayes Factor. This scheme potentially opens the way to computationally efficient ways to compute Bayes Factors in...
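The Savage-Dickey Density Ratio at the heart of the method can be checked in a conjugate Gaussian toy problem, where the Bayes factor is also available analytically (the data and prior scale below are invented):

```python
import math

# Savage-Dickey toy problem: data y_i ~ N(theta, sigma^2) with known sigma,
# prior theta ~ N(0, tau^2); the nested model M0 fixes theta = 0.
# SDDR: B01 = p(y|M0) / p(y|M1) = posterior density at 0 / prior density at 0.
sigma, tau = 1.0, 2.0
y = [0.3, -0.1, 0.4, 0.2, 0.1]   # invented observations
n, ybar = len(y), sum(y) / len(y)

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Conjugate posterior for theta is Gaussian.
var_post = 1.0 / (n / sigma**2 + 1.0 / tau**2)
mu_post = var_post * n * ybar / sigma**2
bf01_sddr = normal_pdf(0.0, mu_post, var_post) / normal_pdf(0.0, 0.0, tau**2)

# Cross-check: in this conjugate setting the evidence ratio reduces to two
# densities of the sample mean.
bf01_direct = (normal_pdf(ybar, 0.0, sigma**2 / n)
               / normal_pdf(ybar, 0.0, sigma**2 / n + tau**2))
print(bf01_sddr, bf01_direct)  # the two routes agree
```

In the paper's setting the posterior density at the nested value is estimated from MCMC samples of the supermodel rather than computed in closed form, but the identity being exploited is the same.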
Weber, Stephanie A; Insaf, Tabassum Z; Hall, Eric S; Talbot, Thomas O; Huff, Amy K
2016-11-01
An enhanced research paradigm is presented to address the spatial and temporal gaps in fine particulate matter (PM2.5) measurements and generate realistic and representative concentration fields for use in epidemiological studies of human exposure to ambient air particulate concentrations. The general approach for research designed to analyze health impacts of exposure to PM2.5 is to use concentration data from the nearest ground-based air quality monitor(s), which typically have missing data on the temporal and spatial scales due to filter sampling schedules and monitor placement, respectively. To circumvent these data gaps, this research project uses a Hierarchical Bayesian Model (HBM) to generate estimates of PM2.5 in areas with and without air quality monitors by combining PM2.5 concentrations measured by monitors, PM2.5 concentration estimates derived from satellite aerosol optical depth (AOD) data, and Community-Multiscale Air Quality (CMAQ) model predictions of PM2.5 concentrations. This methodology represents a substantial step forward in the approach for developing representative PM2.5 concentration datasets to correlate with inpatient hospitalizations and emergency room visits data for asthma and inpatient hospitalizations for myocardial infarction (MI) and heart failure (HF) using case-crossover analysis. There were two key objectives of this study. The first was to show that the inputs to the HBM could be expanded to include AOD data in addition to data from PM2.5 monitors and predictions from CMAQ. The second was to determine if inclusion of AOD surfaces in the HBM algorithms results in PM2.5 concentration surfaces that more accurately predict hospital admittance and emergency room visits for MI, asthma, and HF. This study focuses on the New York City, NY metropolitan and surrounding areas during the 2004-2006 time period, in order to compare the health outcome impacts with those from previous studies and focus on any
Hierarchical Porous Structures
Energy Technology Data Exchange (ETDEWEB)
Grote, Christopher John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-07
Materials design is often at the forefront of technological innovation. While there has always been a push to generate increasingly low-density materials, such as aerogels or hydrogels, more recently the idea of bicontinuous structures has come into play. This review covers some of the methods and applications for generating both porous and hierarchically porous structures.
A Bayesian hierarchical model for accident and injury surveillance.
MacNab, Ying C
2003-01-01
This article presents a recent study which applies Bayesian hierarchical methodology to model and analyse accident and injury surveillance data. A hierarchical Poisson random effects spatio-temporal model is introduced and an analysis of inter-regional variations and regional trends in hospitalisations due to motor vehicle accident injuries to boys aged 0-24 in the province of British Columbia, Canada, is presented. The objective of this article is to illustrate how the modelling technique can be implemented as part of an accident and injury surveillance and prevention system where transportation and/or health authorities may routinely examine accidents, injuries, and hospitalisations to target high-risk regions for prevention programs, to evaluate prevention strategies, and to assist in health planning and resource allocation. The innovation of the methodology is its ability to uncover and highlight important underlying structure of the data. Between 1987 and 1996, British Columbia hospital separation registry registered 10,599 motor vehicle traffic injury related hospitalisations among boys aged 0-24 who resided in British Columbia, of which majority (89%) of the injuries occurred to boys aged 15-24. The injuries were aggregated by three age groups (0-4, 5-14, and 15-24), 20 health regions (based of place-of-residence), and 10 calendar years (1987 to 1996) and the corresponding mid-year population estimates were used as 'at risk' population. An empirical Bayes inference technique using penalised quasi-likelihood estimation was implemented to model both rates and counts, with spline smoothing accommodating non-linear temporal effects. The results show that (a) crude rates and ratios at health region level are unstable, (b) the models with spline smoothing enable us to explore possible shapes of injury trends at both the provincial level and the regional level, and (c) the fitted models provide a wealth of information about the patterns (both over space and time
Budget constraints and policies that limit primary data collection have fueled a practice of transferring estimates (or models to generate estimates) of ecological endpoints from sites where primary data exists to sites where little to no primary data were collected. Whereas bene...
Choi, Sungchan; Ryu, In-Chang; Götze, H.-J.; Chae, Y.
2017-01-01
Although hydrocarbons have been discovered in the West Korea Bay Basin (WKBB), located in the North Korean offshore area, geophysical investigations associated with these hydrocarbon reservoirs are not permitted because of the current geopolitical situation. Interpretation of satellite-derived potential field data can alternatively be used to image the 3-D density distribution in the sedimentary basin associated with hydrocarbon deposits. We interpreted the TRIDENT satellite-derived gravity field data to provide detailed insights into the spatial distribution of sedimentary density structures in the WKBB. We used 3-D forward density modelling for the interpretation, incorporating constraints from existing geological and geophysical information. The gravity data interpretation and the 3-D forward modelling showed that there are two modelled areas in the central subbasin that are characterized by very low-density structures, with a maximum density of about 2000 kg m-3, indicating some type of hydrocarbon reservoir. One of the anticipated hydrocarbon reservoirs is located in the southern part of the central subbasin, with a volume of about 250 km3 at a depth of about 3000 m in the Cretaceous/Jurassic layer. The other hydrocarbon reservoir should exist in the northern part of the central subbasin, with an average volume of about 300 km3 at a depth of about 2500 m.
Semiparametric Quantile Modelling of Hierarchical Data
Institute of Scientific and Technical Information of China (English)
Mao Zai TIAN; Man Lai TANG; Ping Shing CHAN
2009-01-01
The classic hierarchical linear model formulation provides considerable flexibility for modelling the random effects structure and a powerful tool for analyzing nested data that arise in various areas such as biology, economics and education. However, it assumes the within-group errors to be independently and identically distributed (i.i.d.) and the models at all levels to be linear. Most importantly, traditional hierarchical models (just like other ordinary mean regression methods) cannot characterize the entire conditional distribution of a dependent variable given a set of covariates and fail to yield robust estimators. In this article, we relax the aforementioned i.i.d. and normality assumptions, and develop so-called Hierarchical Semiparametric Quantile Regression Models in which the within-group errors may be heteroscedastic and the models at some levels are allowed to be nonparametric. We present the ideas with a 2-level model. The level-1 model is specified as a nonparametric model whereas the level-2 model is set as a parametric model. Under the proposed semiparametric setting, the vector of partial derivatives of the nonparametric function in level 1 becomes the response variable vector in level 2. The proposed method allows us to model the fixed effects in the innermost level (i.e., level 2) as a function of the covariates instead of a constant effect. We outline some mild regularity conditions required for convergence and asymptotic normality of our estimators. We illustrate our methodology with a real hierarchical data set from a laboratory study and some simulation studies.
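The check-loss principle underlying quantile regression can be illustrated with a minimal numerical sketch (this is a generic illustration, not the authors' semiparametric estimator): the constant that minimizes the average pinball loss at level τ is the empirical τ-quantile.

```python
import numpy as np

def pinball_loss(q, y, tau):
    """Average check (pinball) loss of the constant predictor q at quantile level tau."""
    u = y - q
    return np.mean(np.where(u >= 0, tau * u, (tau - 1) * u))

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=5000)

tau = 0.75
# The empirical tau-quantile minimizes the average pinball loss over constants:
grid = np.linspace(y.min(), y.max(), 2001)
losses = [pinball_loss(q, y, tau) for q in grid]
q_hat = grid[int(np.argmin(losses))]
```

Replacing the constant `q` with a function of covariates gives conditional quantile models, the building block the abstract generalizes to hierarchical, partly nonparametric levels.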
Classifying hospitals as mortality outliers: logistic versus hierarchical logistic models.
Alexandrescu, Roxana; Bottle, Alex; Jarman, Brian; Aylin, Paul
2014-05-01
The use of hierarchical logistic regression for provider profiling has been recommended due to the clustering of patients within hospitals, but has some associated difficulties. We assess changes in hospital outlier status based on standard logistic versus hierarchical logistic modelling of mortality. The study population consisted of all patients admitted to acute, non-specialist hospitals in England between 2007 and 2011 with a primary diagnosis of acute myocardial infarction, acute cerebrovascular disease or fracture of neck of femur, or a primary procedure of coronary artery bypass graft or repair of abdominal aortic aneurysm. We compared standardised mortality ratios (SMRs) from non-hierarchical models with SMRs from hierarchical models, without and with shrinkage estimates of the predicted probabilities (Model 1 and Model 2). The SMRs from standard logistic and hierarchical models were highly, statistically significantly correlated (r > 0.91, p = 0.01). Outlier counts differed between standard logistic regression and hierarchical modelling only when using shrinkage estimates (Model 2): out of a cumulative 565 pairs of hospitals under study, 21 hospitals changed from low outlier and 8 hospitals changed from high outlier under logistic regression to non-outlier under the shrinkage estimates. Both standard logistic and hierarchical modelling identified nearly the same hospitals as mortality outliers. The choice of methodological approach should, however, also consider whether the modelling aim is judgment or improvement, as shrinkage may be more appropriate for the former than the latter.
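The contrast between raw and shrunk SMRs can be sketched with hypothetical counts. The gamma-Poisson shrinkage below is a simplified stand-in for the paper's hierarchical logistic model, and every number is illustrative:

```python
import numpy as np

# Hypothetical observed and expected deaths for five hospitals (illustrative only).
observed = np.array([30.0, 45.0, 12.0, 60.0, 25.0])
expected = np.array([28.0, 40.0, 20.0, 50.0, 26.0])

smr = observed / expected                  # standard (non-hierarchical) SMR
overall = observed.sum() / expected.sum()  # system-wide ratio

# Gamma-Poisson (empirical-Bayes) shrinkage toward the overall ratio;
# k acts as a prior strength measured in pseudo-expected deaths.
k = 20.0
smr_shrunk = (observed + k * overall) / (expected + k)
```

Each shrunk SMR is a convex combination of the hospital's own ratio and the overall ratio, weighted by expected volume, which is why low-volume apparent outliers are pulled furthest toward the mean, as in the paper's Model 2.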
Study on Headland-Bay Sandy Coast Stability in South China Coasts
Institute of Scientific and Technical Information of China (English)
YU Ji-tao; CHEN Zi-shen
2011-01-01
Headland-bay beach equilibrium planform has long been a crucial problem in research on long-term sandy beach evolution and stabilization, extensively applied to forecast long-term coastal erosion, the influence of coastal engineering, and long-term coastal management and protection. However, it has received little attention in China. The parabolic relationship is the most widely used empirical relationship for determining the static equilibrium shape of headland-bay beaches. This paper utilizes the relation to predict and classify 31 headland-bay beaches and concludes that these bays cannot achieve the ultimate static equilibrium planform in South China. The empirical bay equation can morphologically estimate beach stabilization state, but it is only a referential predictive means and it is difficult to evaluate headland-bay shoreline movements over years and decades. Using the Digital Shoreline Analysis System developed by the USGS, the rates of shoreline recession and accretion of these different headland-bay beaches are quantitatively calculated from 1990 to 2000. The conclusions of this paper include that (a) most of these 31 bays remained relatively stable, and the rates of erosion and accretion are relatively large where man-made constructions affected estuaries within these bays from 1990 to 2000; (b) two bays, Haimen Bay and Hailingshan Bay, originally in the quasi-static equilibrium planform determined by the parabolic bay shape equation, have become unstable under the influence of coastal engineering; and (c) these 31 bays exhibit different recession and accretion characteristics across bays and segments. On the one hand, some bays totally exhibit accretion, while other bays show erosion on the whole. Shanwei Bay, Houmen Bay, Pinghai Bay and Yazhou Bay have similar planforms, characterized by less accretion on the sheltering segment and greater accretion on the transitional and tangential segments. On the other hand, different segments of some bays have two dissimilar
Directory of Open Access Journals (Sweden)
Wang Junbai
2010-08-01
Background: To further understand the implementation of the hyperparameter re-estimation technique in a Bayesian hierarchical model, we added two more prior assumptions over the weights in BayesPI, namely a Laplace prior and a Cauchy prior, using the evidence approximation method. In addition, we divided the hyperparameters (regularization constants α) of the model into multiple distinct classes based on either the structure of the neural networks or the property of the weights. Results: The newly implemented BayesPI was tested on both synthetic and real ChIP-based high-throughput datasets to identify the corresponding protein binding energy matrices. The results obtained were encouraging: (1) there was a minor effect on the quality of predictions when prior assumptions over the weights were altered (e.g. the prior probability distributions of the weights and the number of classes of the hyperparameters) in BayesPI; (2) however, there was a significant impact on the computational speed when tuning the weight prior in the model: for example, BayesPI with a Laplace weight prior achieved the best performance with regard to both computational speed and prediction accuracy. Conclusions: From this study, we learned that it is necessary to try different prior assumptions over the weights in a Bayesian hierarchical model to design an efficient learning algorithm, though the quality of the final results may not be associated with such changes. In the future, the evidence approximation method can be an alternative to Monte Carlo methods for the computational implementation of Bayesian hierarchical models.
Collaborative Hierarchical Sparse Modeling
Sprechmann, Pablo; Sapiro, Guillermo; Eldar, Yonina C
2010-01-01
Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is done by solving an l_1-regularized linear regression problem, usually called Lasso. In this work we first combine the sparsity-inducing property of the Lasso model, at the individual feature level, with the block-sparsity property of the group Lasso model, where sparse groups of features are jointly encoded, obtaining a sparsity pattern hierarchically structured. This results in the hierarchical Lasso, which shows important practical modeling advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level but not necessarily at the lower one. Signals then share the same active groups, or classes, but not necessarily the same active set. This is very well suited for applications such as source separation. An efficient optimization procedure, which guarantees convergence to the global opt...
Estimation of marine pollution load in the West Sea and Tong'an Bay in Xiamen
Institute of Scientific and Technical Information of China (English)
潘灿民; 张珞平; 黄金良; 崔江瑞
2011-01-01
GIS, empirical sewage-discharge coefficients, and empirical models were used to estimate the marine pollution loads entering the West Sea and Tong'an Bay in Xiamen from land-based sources, marine aquaculture, and atmospheric deposition, yielding the annual fluxes and loads of CODMn, total nitrogen (TN), and total phosphorus (TP) entering each sea area. The results show that land-based sources account for the largest share of the marine pollution load in each sea area, over 75.2%; atmospheric deposition is the second-largest contributor, averaging 1.2%-18.4%; and marine aquaculture contributes the smallest share, less than 1% on average.
Hierarchical manifold learning.
Bhatia, Kanwal K; Rao, Anil; Price, Anthony N; Wolz, Robin; Hajnal, Jo; Rueckert, Daniel
2012-01-01
We present a novel method of hierarchical manifold learning which aims to automatically discover regional variations within images. This involves constructing manifolds in a hierarchy of image patches of increasing granularity, while ensuring consistency between hierarchy levels. We demonstrate its utility in two very different settings: (1) to learn the regional correlations in motion within a sequence of time-resolved images of the thoracic cavity; (2) to find discriminative regions of 3D brain images in the classification of neurodegenerative disease.
Hierarchically Structured Electrospun Fibers
Directory of Open Access Journals (Sweden)
Nicole E. Zander
2013-01-01
Traditional electrospun nanofibers have a myriad of applications ranging from scaffolds for tissue engineering to components of biosensors and energy harvesting devices. The generally smooth one-dimensional structure of the fibers has stood as a limitation to several interesting novel applications. Control of fiber diameter, porosity and collector geometry will be briefly discussed, as will more traditional methods for controlling fiber morphology and fiber mat architecture. The remainder of the review will focus on new techniques to prepare hierarchically structured fibers. Fibers with hierarchical primary structures—including helical, buckled, and beads-on-a-string fibers, as well as fibers with secondary structures, such as nanopores, nanopillars, nanorods, and internally structured fibers and their applications—will be discussed. These new materials with helical/buckled morphology are expected to possess unique optical and mechanical properties with possible applications for negative refractive index materials, highly stretchable/high-tensile-strength materials, and components in microelectromechanical devices. Core-shell type fibers enable a much wider variety of materials to be electrospun and are expected to be widely applied in the sensing, drug delivery/controlled release fields, and in the encapsulation of live cells for biological applications. Materials with a hierarchical secondary structure are expected to provide new superhydrophobic and self-cleaning materials.
Pearce, Dave; Walter, Anton; Lupton, W. F.; Warren-Smith, Rodney F.; Lawden, Mike; McIlwrath, Brian; Peden, J. C. M.; Jenness, Tim; Draper, Peter W.
2015-02-01
The Hierarchical Data System (HDS) is a file-based hierarchical data system designed for the storage of a wide variety of information. It is particularly suited to the storage of large multi-dimensional arrays (with their ancillary data) where efficient access is needed. It is a key component of the Starlink software collection (ascl:1110.012) and is used by the Starlink N-Dimensional Data Format (NDF) library (ascl:1411.023). HDS organizes data into hierarchies, broadly similar to the directory structure of a hierarchical filing system, but contained within a single HDS container file. The structures stored in these files are self-describing and flexible; HDS supports modification and extension of structures previously created, as well as functions such as deletion, copying, and renaming. All information stored in HDS files is portable between the machines on which HDS is implemented. Thus, there are no format conversion problems when moving between machines. HDS can write files in a private binary format (version 4), or be layered on top of HDF5 (version 5).
Hierarchical video summarization
Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.
1998-12-01
We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest-level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. We also propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
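A much-simplified sketch of the coarse-to-fine idea follows; the paper's constrained pairwise K-means is replaced here by a greedy merge of temporally adjacent key-frames, and the feature values are hypothetical:

```python
import numpy as np

def coarsen(keyframes, features, threshold):
    """One level of coarse-to-fine summarization: absorb a key-frame into its
    temporal predecessor's cluster when their feature distance is below
    threshold (a simplified stand-in for constrained pairwise clustering)."""
    merged = [keyframes[0]]
    for idx in keyframes[1:]:
        if np.linalg.norm(features[idx] - features[merged[-1]]) < threshold:
            continue  # same cluster; keep the earlier frame as representative
        merged.append(idx)
    return merged

# Toy 1-D color features for six frames forming two visually distinct shots:
features = np.array([[0.0], [0.1], [0.05], [5.0], [5.1], [4.9]])
fine = list(range(len(features)))
coarse = coarsen(fine, features, threshold=1.0)
```

Applying `coarsen` recursively with growing thresholds yields the multi-level summary the abstract describes: each pass produces a shorter list of representatives while respecting temporal order.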
Spatial variations of mercury in sediment of Minamata Bay, Japan.
Tomiyasu, Takashi; Matsuyama, Akito; Eguchi, Tomomi; Fuchigami, Yoko; Oki, Kimihiko; Horvat, Milena; Rajar, Rudi; Akagi, Hirokatsu
2006-09-01
Mercury-contaminated effluent was discharged into Minamata Bay from a chemical plant over a period of approximately 40 years until 1968. In October 1977, the Minamata Bay Pollution Prevention Project was initiated to dispose of sedimentary sludge containing mercury concentrations higher than 25 mg kg(-1). In March 1990, the project was completed. In an effort to estimate current contamination in the bay, the vertical and horizontal distributions of mercury in sediment were investigated. Sediment core samples were collected on June 26, 2002 at 16 locations in Minamata Bay and Fukuro Bay, located in the southern part of Minamata Bay. The sediment in Fukuro Bay had not been dredged. The total mercury concentration in surface sediment was 1.4-4.3 mg kg(-1) (2.9+/-0.9 mg kg(-1), n=9) for the dredged area of Minamata Bay and 0.3-4.8 mg kg(-1) (3.6+/-1.6 mg kg(-1), n=4) for Fukuro Bay. In the lower layers of long cores taken from both areas, the total mercury concentration decreased with depth and finally showed relatively uniform low values. These values can be considered to represent the background concentration absent of anthropogenic influence, which was estimated for the study area to be 0.068+/-0.012 mg kg(-1) (n=10). From the surface, the total mercury concentration in Fukuro Bay increased with depth and reached a maximum at 8-14 cm. In Minamata Bay, the total mercury concentration did not change significantly within several centimeters of the surface, remaining considerably higher than the background level. At six stations, the methylmercury concentration was determined. Although the vertical variations were similar to those for total mercury, Fukuro Bay sediment showed a higher concentration of methylmercury than Minamata Bay sediment.
Institute of Scientific and Technical Information of China (English)
韩明
2013-01-01
The author previously introduced a new parameter estimation method, the E-Bayesian estimation method, to estimate the reliability derived from the binomial distribution: the definition of the E-Bayesian estimate of the reliability was given, together with formulas for the E-Bayesian estimate and the hierarchical Bayesian estimate, but the properties of the E-Bayesian estimate were not provided. In this paper, properties of the E-Bayesian estimate of the reliability of the binomial distribution are established.
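The quantities involved can be sketched numerically. Below, the Bayes estimate of the reliability p is the Beta-Binomial posterior mean, and the E-Bayesian estimate averages it over a uniform hyper-prior for a on (0, c); the counts n, x, the fixed b, and c are all hypothetical choices for illustration:

```python
import numpy as np

n, x = 20, 18          # hypothetical trials and successes
b = 1.0                # fixed second hyperparameter

def bayes_est(a):
    # Posterior mean of the reliability p under a Beta(a, b) prior
    # and Binomial(n, p) data with x successes.
    return (x + a) / (n + a + b)

# E-Bayesian estimate: average of the Bayes estimate over a uniform
# hyper-prior for a on (0, c), approximated on a fine grid.
c = 2.0
a_grid = np.linspace(1e-6, c, 10001)
e_bayes = float(np.mean([bayes_est(a) for a in a_grid]))
```

Because the Bayes estimate is monotone in a here, the E-Bayesian estimate necessarily lies between the estimates at the endpoints of the hyper-prior's support, which is one of the ordering properties this line of work studies.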
Bayes reconstruction of missing teeth
DEFF Research Database (Denmark)
Sporring, Jon; Jensen, Katrine Hommelhoff
2008-01-01
We propose a method for restoring the surface of tooth crowns in a 3D model of a human denture, so that the pose and anatomical features of the tooth will work well for chewing. This is achieved by including information about the position and anatomy of the other teeth in the mouth. Our system...... contains two major parts: A statistical model of a selection of tooth shapes and a reconstruction of missing data. We use a training set consisting of 3D scans of dental cast models obtained with a laser scanner, and we have built a model of the shape variability of the teeth, their neighbors...... regularization of the log-likelihood estimate based on differential geometrical properties of teeth surfaces, and we show general conditions under which this may be considered a Bayes prior. Finally we use Bayes method to propose the reconstruction of missing data, for e.g. finding the most probable shape
DEFF Research Database (Denmark)
Hede, Mikkel Ulfeldt; Nielsen, Lars; Clemmensen, Lars B
2011-01-01
Estimates of past sea-level variations based on different methods and techniques have been presented in a range of studies, including interpretation of beach ridge characteristics. In Denmark, Holocene beach ridge plains have been formed during the last c. 7700 years, a period characterised by both...... (i.e. sea-level) at the time of deposition. Combining the variations in height of the downlaps (in meters above present mean sea-level) with optically stimulated luminescence dating techniques provides estimates of relative sea-level at specific times....... been chosen as a key-locality in this project, as it is located relatively close to the current 0-isobase of isostatic rebound. GPR reflection data have been acquired with shielded 250 MHz Sensors & software antennae along a number of profile lines across beach ridge and swale structures of the Feddet
Evaluation of hierarchical temporal memory for a real world application
Melis, Wim J.C.; Chizuwa, Shuhei; Kameyama, Michitaka
2010-01-01
Many real-world applications, such as user support systems, remain difficult for conventional algorithms compared with the human brain. Such intelligence is often implemented using probability-based systems. This paper focuses on comparing the implementation of a cellular phone intention estimation example on a Bayesian Network and Hierarchical Temporal Memory. It is found that Hierarchical Temporal Memory is a system that requires little effort for desig...
Modeling nitrogen cycling in forested watersheds of Chesapeake Bay
Energy Technology Data Exchange (ETDEWEB)
Hunsaker, C.T.; Garten, C.T.; Mulholland, P.J.
1995-03-01
The Chesapeake Bay Agreement calls for a 40% reduction of controllable phosphorus and nitrogen to the tidal Bay by the year 2000. To accomplish this goal the Chesapeake Bay Program needs accurate estimates of nutrient loadings, including atmospheric deposition, from various land uses. The literature on forest nitrogen pools and fluxes was reviewed, and nitrogen data from research catchments in the Chesapeake Basin were identified. The structure of a nitrogen module for forests is recommended for the Chesapeake Bay Watershed Model, along with possible functional forms for the fluxes.
Brain networks for confidence weighting and hierarchical inference during probabilistic learning.
Meyniel, Florent; Dehaene, Stanislas
2017-05-09
Learning is difficult when the world fluctuates randomly and ceaselessly. Classical learning algorithms, such as the delta rule with constant learning rate, are not optimal. Mathematically, the optimal learning rule requires weighting prior knowledge and incoming evidence according to their respective reliabilities. This "confidence weighting" implies the maintenance of an accurate estimate of the reliability of what has been learned. Here, using fMRI and an ideal-observer analysis, we demonstrate that the brain's learning algorithm relies on confidence weighting. While in the fMRI scanner, human adults attempted to learn the transition probabilities underlying an auditory or visual sequence, and reported their confidence in those estimates. They knew that these transition probabilities could change simultaneously at unpredicted moments, and therefore that the learning problem was inherently hierarchical. Subjective confidence reports tightly followed the predictions derived from the ideal observer. In particular, subjects managed to attach distinct levels of confidence to each learned transition probability, as required by Bayes-optimal inference. Distinct brain areas tracked the likelihood of new observations given current predictions, and the confidence in those predictions. Both signals were combined in the right inferior frontal gyrus, where they operated in agreement with the confidence-weighting model. This brain region also presented signatures of a hierarchical process that disentangles distinct sources of uncertainty. Together, our results provide evidence that the sense of confidence is an essential ingredient of probabilistic learning in the human brain, and that the right inferior frontal gyrus hosts a confidence-based statistical learning algorithm for auditory and visual sequences.
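A toy version of the ideal-observer computation can make "confidence weighting" concrete. The Beta-Bernoulli sketch below tracks a single fixed transition probability and summarizes confidence as posterior precision; it is an assumed minimal setup, not the authors' full hierarchical change-point model:

```python
import numpy as np

# Ideal-observer tracking of one transition probability with a
# Beta-Bernoulli model; confidence is the posterior precision.
a, b = 1.0, 1.0                        # uniform Beta(1, 1) prior
prior_var = a * b / ((a + b) ** 2 * (a + b + 1))

rng = np.random.default_rng(1)
true_p = 0.8
for obs in rng.random(200) < true_p:   # simulated binary transitions
    if obs:
        a += 1.0
    else:
        b += 1.0

p_hat = a / (a + b)                              # posterior mean estimate
post_var = a * b / ((a + b) ** 2 * (a + b + 1))  # posterior variance
confidence = 1.0 / post_var                      # precision grows with evidence
```

In the full problem the probabilities can change without warning, so the observer must additionally weight old evidence against the possibility of a change point, which is what makes the inference hierarchical.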
Detecting Hierarchical Structure in Networks
DEFF Research Database (Denmark)
Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard
2012-01-01
Many real-world networks exhibit hierarchical organization. Previous models of hierarchies within relational data have focused on binary trees; however, for many networks it is unknown whether there is hierarchical structure, and if there is, a binary tree might not account well for it. We propose a generative Bayesian model that is able to infer whether hierarchies are present or not from a hypothesis space encompassing all types of hierarchical tree structures. For efficient inference we propose a collapsed Gibbs sampling procedure that jointly infers a partition and its hierarchical structure. On synthetic and real data we demonstrate that our model can detect hierarchical structure, leading to better link-prediction than competing models. Our model can be used to detect whether a network exhibits hierarchical structure, thereby leading to a better comprehension and statistical account of the network.
Context updates are hierarchical
Directory of Open Access Journals (Sweden)
Anton Karl Ingason
2016-10-01
This squib studies the order in which elements are added to the shared context of interlocutors in a conversation. It focuses on context updates within one hierarchical structure and argues that structurally higher elements are entered into the context before lower elements, even if the structurally higher elements are pronounced after the lower elements. The crucial data are drawn from a comparison of relative clauses in two head-initial languages, English and Icelandic, and two head-final languages, Korean and Japanese. The findings have consequences for any theory of a dynamic semantics.
Yu, Dingfeng; Zhou, Bin; Fan, Yanguo; Li, Tantan; Liang, Shouzhen; Sun, Xiaoling
2014-11-01
Secchi disk depth (SDD) is an important optical property of water related to water quality and primary production. The traditional sampling method is not only time-consuming and labor-intensive but also limited in terms of temporal and spatial coverage, while remote sensing technology can deal with these limitations. In this study, models estimating SDD have been proposed based on regression analysis between HJ-1 satellite CCD images and synchronous in situ water quality measurements. The results illustrate that the B3/B1 band ratio model of the CCD could be used to estimate Secchi depth in this region, with a mean relative error (MRE) of 8.6% and root mean square error (RMSE) of 0.1 m, respectively. This model has been applied to one HJ-1 satellite CCD image, generating a water transparency map for June 23, 2009, which will be of immense value for environmental monitoring. In addition, SDD was deeper in offshore waters than in inshore waters. River runoffs, hydrodynamic environments, and marine aquaculture are the main factors influencing SDD in this area.
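A band-ratio retrieval of this kind reduces to a simple regression fit. The sketch below fits a linear SDD model to hypothetical match-ups (the ratio and depth values are invented for illustration, not the study's data), then evaluates it with the same RMSE and MRE metrics the abstract reports:

```python
import numpy as np

# Hypothetical match-ups of the CCD B3/B1 band ratio with in situ SDD (m).
ratio = np.array([0.55, 0.62, 0.70, 0.78, 0.85, 0.93, 1.01])
sdd   = np.array([0.8,  1.1,  1.5,  1.9,  2.3,  2.8,  3.2])

# Linear band-ratio model: SDD = k * (B3/B1) + c
k, c = np.polyfit(ratio, sdd, 1)
pred = k * ratio + c
rmse = float(np.sqrt(np.mean((pred - sdd) ** 2)))
mre = float(np.mean(np.abs(pred - sdd) / sdd))
```

Once `k` and `c` are fixed, applying the same expression pixel-by-pixel to the ratio of the two CCD bands yields a transparency map like the one described.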
Universality: Accurate Checks in Dyson's Hierarchical Model
Godina, J. J.; Meurice, Y.; Oktay, M. B.
2003-06-01
In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
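The linear-fitting step can be illustrated on synthetic data with a known exponent: near β_c the susceptibility scales as χ ∝ (β_c − β)^(−γ), so a straight-line fit in log-log coordinates recovers γ from the slope. This is a generic sketch of the fitting idea, not the talk's high-accuracy procedure:

```python
import numpy as np

# Synthetic susceptibility with a known leading exponent gamma = 1.3:
gamma_true = 1.3
beta_c = 1.0
beta = np.linspace(0.90, 0.999, 50)
chi = (beta_c - beta) ** (-gamma_true)

# Linear fit of log(chi) against log(beta_c - beta); the slope is -gamma.
slope, intercept = np.polyfit(np.log(beta_c - beta), np.log(chi), 1)
gamma_est = -slope
```

With real data, subleading corrections of order (β_c − β)^Δ bend the log-log plot away from a straight line, which is why the talk fits both a leading and a subleading exponent.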
Hierarchical Multiclass Decompositions with Application to Authorship Determination
El-Yaniv, Ran
2010-01-01
This paper is mainly concerned with the question of how to decompose multiclass classification problems into binary subproblems. We extend known Jensen-Shannon bounds on the Bayes risk of binary problems to hierarchical multiclass problems and use these bounds to develop a heuristic procedure for constructing hierarchical multiclass decomposition for multinomials. We test our method and compare it to the well known "all-pairs" decomposition. Our tests are performed using a new authorship determination benchmark test of machine learning authors. The new method consistently outperforms the all-pairs decomposition when the number of classes is small and breaks even on larger multiclass problems. Using both methods, the classification accuracy we achieve, using an SVM over a feature set consisting of both high frequency single tokens and high frequency token-pairs, appears to be exceptionally high compared to known results in authorship determination.
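The Jensen-Shannon divergence that these Bayes-risk bounds build on can be computed directly; the snippet below is a generic definition of the quantity, not the paper's hierarchical bound itself:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence (natural log) between discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence: symmetric and bounded above by ln 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two class-conditional distributions over a binary feature:
p = [0.9, 0.1]
q = [0.2, 0.8]
d = js(p, q)
```

Intuitively, the more the class-conditional distributions diverge, the lower the attainable Bayes risk of the binary subproblem, which is what makes JS divergence a useful criterion for choosing how to split classes in a hierarchical decomposition.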
Casco Bay lies at the heart of Maine's most populated area. The health of its waters, wetlands, and wildlife depend in large part on the activities of the quarter-million residents who live in its watershed. Less than 30 years ago, portions of Casco Bay were off-limits to recr...
DEFF Research Database (Denmark)
Engholm, Ida
2014-01-01
Celebrated as one of the leading and most valuable brands in the world, eBay has acquired iconic status on par with century-old brands such as Coca-Cola and Disney. The eBay logo is now synonymous with the world’s leading online auction website, and its design is associated with the company...
Institute of Scientific and Technical Information of China (English)
姚丽
2014-01-01
It is proved that for the classical estimators (the maximum likelihood estimator and the moment estimator) of the unknown parameter of a binomial distribution, there always exists a prior distribution under which the Bayesian estimator coincides with the classical estimator.
Data supporting study of Ecosystem Metabolism in Pensacola Bay estuary
U.S. Environmental Protection Agency — These files house the data collected during 2013 in lower Pensacola Bay. The data were used to estimate aquatic primary production and respiration. This dataset is...
Canonical sound speed profile for the central Bay of Bengal
Digital Repository Service at National Institute of Oceanography (India)
Murty, T.V.R.; PrasannaKumar, S.; Somayajulu, Y.K.; Sastry, J.S.; De Figueiredo, R.J.P.
Following Munk's canonical theory, an algorithm has been presented for computing sound channel parameters in the western and southern Bay of Bengal. The estimated canonical sound speed profile using these parameters has been compared with computed...
Fractal Analysis Based on Hierarchical Scaling in Complex Systems
Chen, Yanguang
2016-01-01
A fractal is in essence a hierarchy with cascade structure, which can be described with a set of exponential functions. From these exponential functions, a set of power laws indicative of scaling can be derived. Hierarchy structure and spatial networks proved to be associated with one another. This paper is devoted to exploring the theory of fractal analysis of complex systems by means of hierarchical scaling. Two research methods are utilized in this study: logical analysis and empirical analysis. The main results are as follows. First, a fractal system such as the Cantor set is described from the hierarchical angle of view; based on hierarchical structure, three approaches are proposed to estimate fractal dimension. Second, the hierarchical scaling can be generalized to describe multifractals, fractal complementary sets, and self-similar curves such as the logarithmic spiral. Third, complex systems such as urban systems are demonstrated to be a self-similar hierarchy. The human settlements i...
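The hierarchical route to fractal dimension can be shown for the Cantor set: level m of the hierarchy contains N_m = 2^m pieces of linear size r_m = 3^(−m), and the similarity dimension D = ln N_m / ln(1/r_m) comes out the same at every level, which is the hierarchical scaling the abstract describes.

```python
import math

# Cantor set as a hierarchy: level m has 2**m pieces of size 3**(-m).
levels = range(1, 8)
dims = [math.log(2 ** m) / math.log(3 ** m) for m in levels]
# Every level yields the same similarity dimension, ln 2 / ln 3 ≈ 0.6309.
```

For empirical hierarchies (e.g. city-size distributions), the same computation is applied to measured counts and sizes per level, and the dimension is estimated from the slope of the resulting log-log relation rather than read off exactly.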
Hierarchical Ag mesostructures for single particle SERS substrate
Xu, Minwei; Zhang, Yin
2017-01-01
Hierarchical Ag mesostructures with highly rough surface morphology have been synthesized at room temperature through a simple seed-mediated approach. Electron microscopy characterizations indicate that the obtained Ag mesostructures exhibit a textured surface morphology with a flower-like architecture. Moreover, the particle size can be tailored easily in the range of 250-500 nm. For the growth process of the hierarchical Ag mesostructures, it is believed that the self-assembly mechanism is more reasonable than epitaxial overgrowth of the Ag seed. The oriented attachment of nanoparticles is revealed during the formation of the Ag mesostructures. Single particle surface enhanced Raman spectra (sp-SERS) of crystal violet adsorbed on the hierarchical Ag mesostructures were measured. Results reveal that the hierarchical Ag mesostructures can be highly sensitive sp-SERS substrates with good reproducibility. The average enhancement factors for individual Ag mesostructures are estimated to be about 10⁶.
Hierarchical partial order ranking.
Carlsen, Lars
2008-09-01
Assessing the potential impact on environmental and human health from the production and use of chemicals or from polluted sites involves a multi-criteria evaluation scheme. A priori, several parameters must be addressed, e.g., production tonnage, specific release scenarios, and geographical and site-specific factors, in addition to various substance-dependent parameters. Further socio-economic factors may be taken into consideration. The number of parameters to be included may well appear to be prohibitive for developing a sensible model. The study introduces hierarchical partial order ranking (HPOR) that remedies this problem. With HPOR, the original parameters are initially grouped based on their mutual connection, and a set of meta-descriptors is derived representing the ranking corresponding to the single groups of descriptors, respectively. A second partial order ranking is carried out based on the meta-descriptors, the final ranking being disclosed through average ranks. An illustrative example on the prioritization of polluted sites is given.
Trees and Hierarchical Structures
Haeseler, Arndt
1990-01-01
The "raison d'être" of hierarchical clustering theory stems from one basic phenomenon: the notorious non-transitivity of similarity relations. In spite of the fact that very often two objects may be quite similar to a third without being that similar to each other, one still wants to classify objects according to their similarity. This should be achieved by grouping them into a hierarchy of non-overlapping clusters such that any two objects in one cluster appear to be more related to each other than they are to objects outside this cluster. In everyday life, as well as in essentially every field of scientific investigation, there is an urge to reduce complexity by recognizing and establishing reasonable classification schemes. Unfortunately, this is counterbalanced by the experience of seemingly unavoidable deadlocks caused by the existence of sequences of objects, each comparatively similar to the next, but the last rather different from the first.
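The nested, non-overlapping hierarchy the abstract describes can be sketched with a naive single-linkage agglomeration (a standard construction, not this chapter's formalism; the point data are invented). Note how a "chain" of objects, each close to the next but with distant endpoints, still gets folded into one nested hierarchy:

```python
def single_linkage(points, names):
    """Naive single-linkage agglomerative clustering.

    Repeatedly merges the two clusters with the smallest inter-point
    distance, producing a hierarchy of non-overlapping clusters.
    O(n^3); library implementations are far more efficient."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    clusters = [({name}, [pt]) for name, pt in zip(names, points)]
    merges = []
    while len(clusters) > 1:
        # find the pair of clusters whose closest points are nearest
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: min(dist(p, q) for p in clusters[ij[0]][1]
                               for q in clusters[ij[1]][1]),
        )
        merged = (clusters[i][0] | clusters[j][0], clusters[i][1] + clusters[j][1])
        merges.append(frozenset(merged[0]))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return merges

# a chain of objects: each similar to the next, the ends rather different
pts = [(0.0,), (1.0,), (2.1,), (6.0,)]
print(single_linkage(pts, "abcd"))
```

The merge sequence ({a,b}, then {a,b,c}, then all four) is the nested cluster hierarchy; the non-transitivity of similarity is absorbed into the order of the merges rather than producing overlapping groups.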
Hierarchical Affinity Propagation
Givoni, Inmar; Frey, Brendan J
2012-01-01
Affinity propagation is an exemplar-based clustering algorithm that finds a set of data-points that best exemplify the data, and associates each datapoint with one exemplar. We extend affinity propagation in a principled way to solve the hierarchical clustering problem, which arises in a variety of domains including biology, sensor networks and decision making in operational research. We derive an inference algorithm that operates by propagating information up and down the hierarchy, and is efficient despite the high-order potentials required for the graphical model formulation. We demonstrate that our method outperforms greedy techniques that cluster one layer at a time. We show that on an artificial dataset designed to mimic the HIV-strain mutation dynamics, our method outperforms related methods. For real HIV sequences, where the ground truth is not available, we show our method achieves better results, in terms of the underlying objective function, and show the results correspond meaningfully to geographi...
Optimisation by hierarchical search
Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias
2015-03-01
Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
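The core idea of optimising groups of variables jointly can be sketched on a toy Ising-style cost. This is an illustrative block-search sketch under assumed names and couplings, not the authors' hierarchical algorithm: each block of spins is exhaustively re-optimised while the rest are held fixed, whereas a real solver would nest blocks hierarchically and avoid exhaustive enumeration.

```python
from itertools import product

def energy(spins, couplings):
    """Ising-style cost: -sum of J_ij * s_i * s_j over coupled pairs."""
    return -sum(j * spins[a] * spins[b] for (a, b), j in couplings.items())

def hierarchical_search(spins, couplings, blocks, sweeps=5):
    """Optimise one block of variables at a time, trying all joint
    assignments of the block while holding the other spins fixed."""
    spins = list(spins)
    for _ in range(sweeps):
        for block in blocks:
            best_vals, best_e = None, None
            for vals in product([-1, 1], repeat=len(block)):
                trial = list(spins)
                for idx, v in zip(block, vals):
                    trial[idx] = v
                e = energy(trial, couplings)
                if best_e is None or e < best_e:
                    best_vals, best_e = vals, e
            for idx, v in zip(block, best_vals):
                spins[idx] = v
    return spins, energy(spins, couplings)

# ferromagnetic chain: the ground state is all spins aligned
J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}
result, e = hierarchical_search([1, -1, 1, -1], J, blocks=[(0, 1), (2, 3)])
print(result, e)  # [1, 1, 1, 1] -3.0
```

Flipping single spins from the alternating start can get stuck in local minima; re-optimising pairs jointly escapes them here, which is the advantage block-wise (and, recursively, hierarchical) search aims to exploit.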
How hierarchical is language use?
Frank, Stefan L.; Bod, Rens; Christiansen, Morten H.
2012-01-01
It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science. PMID:22977157
Associative Hierarchical Random Fields.
Ladický, L'ubor; Russell, Chris; Kohli, Pushmeet; Torr, Philip H S
2014-06-01
This paper makes two contributions: the first is the proposal of a new model, the associative hierarchical random field (AHRF), and a novel algorithm for its optimization; the second is the application of this model to the problem of semantic segmentation. Most methods for semantic segmentation are formulated as a labeling problem for variables that might correspond to either pixels or segments such as super-pixels. It is well known that the generation of super-pixel segmentations is not unique. This has motivated many researchers to use multiple super-pixel segmentations for problems such as semantic segmentation or single view reconstruction. These super-pixels have not yet been combined in a principled manner; this is a difficult problem, as they may overlap, or be nested in such a way that the segmentations form a segmentation tree. Our new hierarchical random field model allows information from all of the multiple segmentations to contribute to a global energy. MAP inference in this model can be performed efficiently using powerful graph cut based move making algorithms. Our framework generalizes much of the previous work based on pixels or segments, and the resulting labelings can be viewed both as a detailed segmentation at the pixel level, or, at the other extreme, as a segment selector that pieces together a solution like a jigsaw, selecting the best segments from different segmentations as pieces. We evaluate its performance on some of the most challenging data sets for object class segmentation, and show that this ability to perform inference using multiple overlapping segmentations leads to state-of-the-art results.
Option pricing, Bayes risks and Applications
Yatracos, Yannis G.
2013-01-01
A statistical decision problem is hidden at the core of option pricing. A simple form for the price C of a European call option is obtained via the minimum Bayes risk, R_B, of a 2-parameter estimation problem, thus justifying calling C the Bayes (B-)price. The result provides new insight into option pricing, among others obtaining C for some stock-price models using the underlying probability instead of the risk-neutral probability and giving R_B an economic interpretation. When logarithmic stock p...
Modeling hierarchical structures - Hierarchical Linear Modeling using MPlus
Jelonek, Magdalena
2006-01-01
The aim of this paper is to present the technique (and its linkage with physics) of overcoming problems connected to modeling social structures, which are typically hierarchical. Hierarchical Linear Models provide a conceptual and statistical mechanism for drawing conclusions regarding the influence of phenomena at different levels of analysis. In the social sciences they are used to analyze many problems, such as educational, organizational or market dilemmas. This paper introduces the logic of m...
Petrov, Romain G; Boskri, Abdelkarim; Folcher, Jean-Pierre; Lagarde, Stephane; Bresson, Yves; Benkhaldoum, Zouhair; Lazrek, Mohamed; Rakshit, Suvendu
2014-01-01
The limiting magnitude is a key issue for optical interferometry. Pairwise fringe trackers based on the integrated optics concepts used, for example, in GRAVITY seem limited to about K=10.5 with the 8m Unit Telescopes of the VLTI, and there is a general "common sense" statement that the efficiency of fringe tracking, and hence the sensitivity of optical interferometry, must decrease as the number of apertures increases, at least in the near infrared where we are still limited by detector readout noise. Here we present a Hierarchical Fringe Tracking (HFT) concept with sensitivity at least equal to that of a two-aperture fringe tracker. HFT is based on the combination of the apertures in pairs, then in pairs of pairs, then in pairs of groups. The key HFT module is a device that behaves like a spatial filter for two telescopes (2TSF) and transmits all or most of the flux of a cophased pair in a single-mode beam. We give an example of such an achromatic 2TSF, based on very broadband dispersed fringes analyzed by g...
Tunesi, Luca; Armbruster, Philippe
2004-02-01
The objective of this paper is to demonstrate a suitable hierarchical networking solution to improve capabilities and performances of space systems, with significant recurrent cost savings and more efficient design & manufacturing flows. Classically, a satellite can be split into two functional sub-systems: the platform and the payload complement. The platform is in charge of providing power, attitude & orbit control and up/down-link services, whereas the payload represents the scientific and/or operational instruments/transponders and embodies the objectives of the mission. One major possibility to improve the performance of payloads, by limiting the data return to pertinent information, is to process data on board thanks to a proper implementation of the payload data system. In this way, it is possible to share non-recurring development costs by exploiting a system that can be adopted by the majority of space missions. It is believed that the Modular and Scalable Payload Data System, under development by ESA, provides a suitable solution to fulfil a large range of future mission requirements. The backbone of the system is the standardised high data rate SpaceWire network http://www.ecss.nl/. As a complement, a lower-speed command and control bus connecting peripherals is required. For instance, at instrument level, there is a need for a "local" low-complexity bus, which gives the possibility to command and control sensors and actuators. Moreover, most of the connections at sub-system level are related to discrete signals management or simple telemetry acquisitions, which can easily and efficiently be handled by a local bus. An on-board hierarchical network can therefore be defined by interconnecting high-speed links and local buses. Additionally, it is worth stressing another important aspect of the design process: Agencies, and ESA in particular, are frequently confronted with a big consortium of geographically spread companies located in different countries, each one
Biscayne Bay Alongshore Epifauna
National Oceanic and Atmospheric Administration, Department of Commerce — Field studies to characterize the alongshore epifauna (shrimp, crabs, echinoderms, and small fishes) along the western shore of southern Biscayne Bay were started in...
National Oceanic and Atmospheric Administration, Department of Commerce — This image represents a 4x4 meter resolution bathymetric surface for Jobos Bay, Puerto Rico (in NAD83 UTM 19 North). The depth values are in meters referenced to the...
Hammond Bay Biological Station
Federal Laboratory Consortium — Hammond Bay Biological Station (HBBS), located near Millersburg, Michigan, is a field station of the USGS Great Lakes Science Center (GLSC). HBBS was established by...
Chesapeake Bay Tributary Strategies
Chesapeake Bay Tributary Strategies were developed by the seven watershed jurisdictions and outlined the river basin-specific implementation activities to reduce nutrient and sediment pollutant loads from point and nonpoint sources.
National Oceanic and Atmospheric Administration, Department of Commerce — This data set consists of 0.5-meter pixel resolution, four band orthoimages covering the Humboldt Bay area. An orthoimage is remotely sensed image data in which...
DEFF Research Database (Denmark)
Engholm, Ida
2014-01-01
Celebrated as one of the leading and most valuable brands in the world, eBay has acquired iconic status on par with century-old brands such as Coca-Cola and Disney. The eBay logo is now synonymous with the world's leading online auction website, and its design is associated with the company's purpose: selling millions of goods, some of which are 'designer' items and some of which are considered design icons.
Hierarchical materials: Background and perspectives
DEFF Research Database (Denmark)
2016-01-01
Hierarchical design draws inspiration from analysis of biological materials and has opened new possibilities for enhancing performance and enabling new functionalities and extraordinary properties. With the development of nanotechnology, the necessary technological requirements for the manufactur...
Hierarchical clustering for graph visualization
Clémençon, Stéphan; Rossi, Fabrice; Tran, Viet Chi
2012-01-01
This paper describes a graph visualization methodology based on hierarchical maximal modularity clustering, with interactive and significant coarsening and refining possibilities. An application of this method to HIV epidemic analysis in Cuba is outlined.
Direct hierarchical assembly of nanoparticles
Xu, Ting; Zhao, Yue; Thorkelsson, Kari
2014-07-22
The present invention provides hierarchical assemblies of a block copolymer, a bifunctional linking compound and a nanoparticle. The block copolymers form one micro-domain and the nanoparticles another micro-domain.
Functional annotation of hierarchical modularity.
Directory of Open Access Journals (Sweden)
Kanchana Padmanabhan
Full Text Available In biological networks of molecular interactions in a cell, network motifs that are biologically relevant are also functionally coherent, or form functional modules. These functionally coherent modules combine in a hierarchical manner into larger, less cohesive subsystems, thus revealing one of the essential design principles of system-level cellular organization and function: hierarchical modularity. Arguably, hierarchical modularity has not been explicitly taken into consideration by most, if not all, functional annotation systems. As a result, the existing methods would often fail to assign a statistically significant functional coherence score to biologically relevant molecular machines. We developed a methodology for hierarchical functional annotation. Given the hierarchical taxonomy of functional concepts (e.g., Gene Ontology) and the association of individual genes or proteins with these concepts (e.g., GO terms), our method will assign a Hierarchical Modularity Score (HMS) to each node in the hierarchy of functional modules; the HMS score and its p-value measure functional coherence of each module in the hierarchy. While existing methods annotate each module with a set of "enriched" functional terms in a bag of genes, our complementary method provides the hierarchical functional annotation of the modules and their hierarchically organized components. A hierarchical organization of functional modules often comes as a by-product of cluster analysis of gene expression data or protein interaction data. Otherwise, our method will automatically build such a hierarchy by directly incorporating the functional taxonomy information into the hierarchy search process and by allowing multi-functional genes to be part of more than one component in the hierarchy. In addition, its underlying HMS scoring metric ensures that functional specificity of the terms across different levels of the hierarchical taxonomy is properly treated. We have evaluated our
Fan, Wentao; Sallay, Hassen; Bouguila, Nizar
2016-06-09
In this paper, a novel statistical generative model based on hierarchical Pitman-Yor process and generalized Dirichlet distributions (GDs) is presented. The proposed model allows us to perform joint clustering and feature selection thanks to the interesting properties of the GD distribution. We develop an online variational inference algorithm, formulated in terms of the minimization of a Kullback-Leibler divergence, of our resulting model that tackles the problem of learning from high-dimensional examples. This variational Bayes formulation allows simultaneously estimating the parameters, determining the model's complexity, and selecting the appropriate relevant features for the clustering structure. Moreover, the proposed online learning algorithm allows data instances to be processed in a sequential manner, which is critical for large-scale and real-time applications. Experiments conducted using challenging applications, namely, scene recognition and video segmentation, where our approach is viewed as an unsupervised technique for visual learning in high-dimensional spaces, showed that the proposed approach is suitable and promising.
Scale of association: hierarchical linear models and the measurement of ecological systems
Sean M. McMahon; Jeffrey M. Diez
2007-01-01
A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...
Hierarchical architecture of active knits
Abel, Julianna; Luntz, Jonathan; Brei, Diann
2013-12-01
Nature eloquently utilizes hierarchical structures to form the world around us. Applying the hierarchical architecture paradigm to smart materials can provide a basis for a new genre of actuators which produce complex actuation motions. One promising example of cellular architecture—active knits—provides complex three-dimensional distributed actuation motions with expanded operational performance through a hierarchically organized structure. The hierarchical structure arranges a single fiber of active material, such as shape memory alloys (SMAs), into a cellular network of interlacing adjacent loops according to a knitting grid. This paper defines a four-level hierarchical classification of knit structures: the basic knit loop, knit patterns, grid patterns, and restructured grids. Each level of the hierarchy provides increased architectural complexity, resulting in expanded kinematic actuation motions of active knits. The range of kinematic actuation motions are displayed through experimental examples of different SMA active knits. The results from this paper illustrate and classify the ways in which each level of the hierarchical knit architecture leverages the performance of the base smart material to generate unique actuation motions, providing necessary insight to best exploit this new actuation paradigm.
Regulator Loss Functions and Hierarchical Modeling for Safety Decision Making.
Hatfield, Laura A; Baugh, Christine M; Azzone, Vanessa; Normand, Sharon-Lise T
2017-07-01
Regulators must act to protect the public when evidence indicates safety problems with medical devices. This requires complex tradeoffs among risks and benefits, which conventional safety surveillance methods do not incorporate. To combine explicit regulator loss functions with statistical evidence on medical device safety signals to improve decision making. In the Hospital Cost and Utilization Project National Inpatient Sample, we select pediatric inpatient admissions and identify adverse medical device events (AMDEs). We fit hierarchical Bayesian models to the annual hospital-level AMDE rates, accounting for patient and hospital characteristics. These models produce expected AMDE rates (a safety target), against which we compare the observed rates in a test year to compute a safety signal. We specify a set of loss functions that quantify the costs and benefits of each action as a function of the safety signal. We integrate the loss functions over the posterior distribution of the safety signal to obtain the posterior (Bayes) risk; the preferred action has the smallest Bayes risk. Using simulation and an analysis of AMDE data, we compare our minimum-risk decisions to a conventional Z score approach for classifying safety signals. The 2 rules produced different actions for nearly half of hospitals (45%). In the simulation, decisions that minimize Bayes risk outperform Z score-based decisions, even when the loss functions or hierarchical models are misspecified. Our method is sensitive to the choice of loss functions; eliciting quantitative inputs to the loss functions from regulators is challenging. A decision-theoretic approach to acting on safety signals is potentially promising but requires careful specification of loss functions in consultation with subject matter experts.
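The decision rule the abstract describes, integrating loss functions over the posterior of the safety signal and choosing the action with minimum Bayes risk, can be sketched generically. The loss functions, action names and posterior draws below are hypothetical stand-ins, not the paper's elicited losses or fitted hierarchical model:

```python
def bayes_action(posterior_samples, losses):
    """Return the action with smallest posterior expected loss (Bayes risk).

    posterior_samples: draws of the safety signal (e.g. observed-minus-
    expected adverse event rate) from a hierarchical model's posterior.
    losses: dict mapping action name -> loss as a function of the signal."""
    def risk(loss):
        # Monte Carlo estimate of the posterior expected loss
        return sum(loss(s) for s in posterior_samples) / len(posterior_samples)
    risks = {action: risk(loss) for action, loss in losses.items()}
    return min(risks, key=risks.get), risks

# hypothetical losses: ignoring a large signal is costly, but so is
# recalling a device when the true excess risk is small
losses = {
    "no_action": lambda s: max(0.0, 10.0 * s),
    "investigate": lambda s: 1.0 + max(0.0, 2.0 * s),
    "recall": lambda s: 5.0,  # fixed cost regardless of the signal
}
samples = [0.05, 0.1, 0.2, 0.15, 0.0]  # stand-in posterior draws
action, risks = bayes_action(samples, losses)
print(action, risks)
```

Because the loss is averaged over the full posterior rather than applied to a point estimate, the chosen action reflects the uncertainty in the safety signal, which is what distinguishes this rule from a Z score threshold.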
Torczynski, John R.
2001-02-27
A module bay requires less cleanroom airflow. A shaped gas inlet passage can allow cleanroom air into the module bay with flow velocity preferentially directed toward contaminant rich portions of a processing module in the module bay. Preferential gas flow direction can more efficiently purge contaminants from appropriate portions of the module bay, allowing a reduced cleanroom air flow rate for contaminant removal. A shelf extending from an air inlet slit in one wall of a module bay can direct air flowing therethrough toward contaminant-rich portions of the module bay, such as a junction between a lid and base of a processing module.
A hierarchical linear model for tree height prediction.
Vicente J. Monleon
2003-01-01
Measuring tree height is a time-consuming process. Often, tree diameter is measured and height is estimated from a published regression model. Trees used to develop these models are clustered into stands, but this structure is ignored and independence is assumed. In this study, hierarchical linear models that account explicitly for the clustered structure of the data...
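The practical effect of modelling the stand-level clustering is partial pooling: stand estimates are shrunk toward the overall mean, more strongly for stands with few trees. The sketch below shows that mechanism for a random-intercept model with hypothetical stand data; the variance components are supplied directly rather than estimated by REML or MCMC as a real fit would do.

```python
def partial_pooling(groups, tau2, sigma2):
    """Shrink each group mean toward the grand mean.

    tau2: between-group variance; sigma2: within-group variance.
    This is the core of a random-intercept hierarchical linear model."""
    all_values = [v for vals in groups.values() for v in vals]
    grand = sum(all_values) / len(all_values)
    shrunk = {}
    for name, vals in groups.items():
        n = len(vals)
        mean = sum(vals) / n
        w = tau2 / (tau2 + sigma2 / n)  # more trees -> less shrinkage
        shrunk[name] = w * mean + (1 - w) * grand
    return shrunk

# hypothetical stand-level tree heights (metres)
stands = {"stand1": [20.0, 22.0], "stand2": [30.0], "stand3": [24.0, 26.0, 25.0]}
print(partial_pooling(stands, tau2=4.0, sigma2=4.0))
```

The single-tree stand is pulled hardest toward the grand mean, exactly the behaviour that is lost when the clustered structure is ignored and all trees are treated as independent.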
33 CFR 100.124 - Maggie Fischer Memorial Great South Bay Cross Bay Swim, Great South Bay, New York.
2010-07-01
... South Bay Cross Bay Swim, Great South Bay, New York. 100.124 Section 100.124 Navigation and Navigable... NAVIGABLE WATERS § 100.124 Maggie Fischer Memorial Great South Bay Cross Bay Swim, Great South Bay, New York... swimmer or safety craft on the swim event race course bounded by the following points: Starting Point...
Hierarchical Reverberation Mapping
Brendon J. Brewer; Elliott, Tom M.
2013-01-01
Reverberation mapping (RM) is an important technique in studies of active galactic nuclei (AGN). The key idea of RM is to measure the time lag $\\tau$ between variations in the continuum emission from the accretion disc and subsequent response of the broad line region (BLR). The measurement of $\\tau$ is typically used to estimate the physical size of the BLR and is combined with other measurements to estimate the black hole mass $M_{\\rm BH}$. A major difficulty with RM campaigns is the large a...
bayesPop: Probabilistic Population Projections
Directory of Open Access Journals (Sweden)
Hana Ševčíková
2016-12-01
Full Text Available We describe bayesPop, an R package for producing probabilistic population projections for all countries. This uses probabilistic projections of total fertility and life expectancy generated by Bayesian hierarchical models. It produces a sample from the joint posterior predictive distribution of future age- and sex-specific population counts, fertility rates and mortality rates, as well as future numbers of births and deaths. It provides graphical ways of summarizing this information, including trajectory plots and various kinds of probabilistic population pyramids. An expression language is introduced which allows the user to produce the predictive distribution of a wide variety of derived population quantities, such as the median age or the old age dependency ratio. The package produces aggregated projections for sets of countries, such as UN regions or trading blocs. The methodology has been used by the United Nations to produce their most recent official population projections for all countries, published in the World Population Prospects.
A Hierarchical Bayesian Model for Crowd Emotions
Urizar, Oscar J.; Baig, Mirza S.; Barakova, Emilia I.; Regazzoni, Carlo S.; Marcenaro, Lucio; Rauterberg, Matthias
2016-01-01
Estimation of emotions is an essential aspect in developing intelligent systems intended for crowded environments. However, emotion estimation in crowds remains a challenging problem due to the complexity in which human emotions are manifested and the capability of a system to perceive them in such conditions. This paper proposes a hierarchical Bayesian model to learn, in an unsupervised manner, the behavior of individuals and of the crowd as a single entity, and explores the relation between behavior and emotions to infer emotional states. Information about the motion patterns of individuals is described using a self-organizing map, and a hierarchical Bayesian network builds probabilistic models to identify behaviors and infer the emotional state of individuals and the crowd. This model is trained and tested using data produced from simulated scenarios that resemble real-life environments. The conducted experiments tested the efficiency of our method to learn, detect and associate behaviors with emotional states, yielding accuracy levels of 74% for individuals and 81% for the crowd, similar in performance to existing methods for pedestrian behavior detection but with novel concepts regarding the analysis of crowds. PMID:27458366
Directory of Open Access Journals (Sweden)
Kim Seongho
2011-10-01
Full Text Available Abstract Background Mass spectrometry (MS) based metabolite profiling has been increasingly popular for scientific and biomedical studies, primarily due to recent technological developments such as comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GCxGC/TOF-MS). Nevertheless, the identification of metabolites from complex samples is subject to errors. Statistical/computational approaches to improve the accuracy of the identifications and the false positive estimate are in great need. We propose an empirical Bayes model which accounts for a competing score in addition to the similarity score to tackle this problem. The competition score characterizes the propensity of a candidate metabolite of being matched to some spectrum based on the metabolite's similarity score with other spectra in the library searched against. The competition score allows the model to properly assess the evidence on the presence/absence status of a metabolite based on whether or not the metabolite is matched to some sample spectrum. Results With a mixture of metabolite standards, we demonstrated that our method has better identification accuracy than four other existing methods. Moreover, our method has a reliable false discovery rate estimate. We also applied our method to data collected from the plasma of a rat and identified some metabolites from the plasma under control of the false discovery rate. Conclusions We developed an empirical Bayes model for metabolite identification and validated the method through a mixture of metabolite standards and rat plasma. The results show that our hierarchical model improves identification accuracy as compared with methods that do not structurally model the involved variables. The improvement in identification accuracy is likely to facilitate downstream analysis such as peak alignment and biomarker identification. Raw data and result matrices can be found at http
Hierarchical topic modeling with nested hierarchical Dirichlet process
Institute of Scientific and Technical Information of China (English)
Yi-qun DING; Shan-ping LI; Zhen ZHANG; Bin SHEN
2009-01-01
This paper deals with the statistical modeling of latent topic hierarchies in text corpora. The height of the topic tree is assumed to be fixed, while the number of topics on each level is unknown a priori and is to be inferred from data. Taking a nonparametric Bayesian approach to this problem, we propose a new probabilistic generative model based on the nested hierarchical Dirichlet process (nHDP) and present a Markov chain Monte Carlo sampling algorithm for the inference of the topic tree structure as well as the word distribution of each topic and topic distribution of each document. Our theoretical analysis and experiment results show that this model can produce a more compact hierarchical topic structure and captures more fine-grained topic relationships compared to the hierarchical latent Dirichlet allocation model.
Hierarchical probabilistic inference of cosmic shear
Schneider, Michael D; Marshall, Philip J; Dawson, William A; Meyers, Joshua; Bard, Deborah J; Lang, Dustin
2014-01-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the glo...
Emergence of a 'visual number sense' in hierarchical generative models.
Stoianov, Ivilin; Zorzi, Marco
2012-01-08
Numerosity estimation is phylogenetically ancient and foundational to human mathematical learning, but its computational bases remain controversial. Here we show that visual numerosity emerges as a statistical property of images in 'deep networks' that learn a hierarchical generative model of the sensory input. Emergent numerosity detectors had response profiles resembling those of monkey parietal neurons and supported numerosity estimation with the same behavioral signature shown by humans and animals.
D'Agostini, G
2005-01-01
It is curious to learn that Enrico Fermi knew how to base probabilistic inference on Bayes theorem, and that some influential notes on statistics for physicists stem from what the author calls elsewhere, but never in these notes, {\\it the Bayes Theorem of Fermi}. The fact is curious because the large majority of living physicists, educated in the second half of last century -- a kind of middle age in the statistical reasoning -- never heard of Bayes theorem during their studies, though they have been constantly using an intuitive reasoning quite Bayesian in spirit. This paper is based on recollections and notes by Jay Orear and on Gauss' ``Theoria motus corporum coelestium'', being the {\\it Princeps mathematicorum} remembered by Orear as source of Fermi's Bayesian reasoning.
Deliberate change without hierarchical influence?
DEFF Research Database (Denmark)
Nørskov, Sladjana; Kesting, Peter; Ulhøi, John Parm
2017-01-01
Purpose: This paper starts from the observation that deliberate change is strongly associated with formal structures and top-down influence. Hierarchical configurations have been used to structure processes, overcome resistance and get things done. But is deliberate change also possible without formal… The study reveals that deliberate change is indeed achievable in a non-hierarchical, collaborative OSS community context. However, it presupposes the presence and active involvement of informal change agents. The paper identifies and specifies four key drivers of change agents' influence. Originality/value: The findings contribute to organisational analysis by providing a deeper understanding of the importance of leadership in making deliberate change possible in non-hierarchical settings. It points to the importance of "change-by-conviction", essentially based on voluntary behaviour. This can open the door...
Static Correctness of Hierarchical Procedures
DEFF Research Database (Denmark)
Schwartzbach, Michael Ignatieff
1990-01-01
A system of hierarchical, fully recursive types in a truly imperative language allows program fragments written for small types to be reused for all larger types. To exploit this property to enable type-safe hierarchical procedures, it is necessary to impose a static requirement on procedure calls. We introduce an example language and prove the existence of a sound requirement which preserves static correctness while allowing hierarchical procedures. This requirement is further shown to be optimal, in the sense that it imposes as few restrictions as possible. This establishes the theoretical basis for a general type hierarchy with static type checking, which enables first-order polymorphism combined with multiple inheritance and specialization in a language with assignments. We extend the results to include opaque types. An opaque version of a type is different from the original but has...
Asquith, W.H.; Mosier, J. G.; Bush, P.W.
1997-01-01
This report presents the results of a study to quantify current (1983–93) mean freshwater inflows to the six bay systems (open water and wetlands) in the Corpus Christi Bay National Estuary Program study area, to test for historical temporal trends in inflows, and to quantify historical and projected changes in inflows. The report also addresses the adequacy of existing data to estimate freshwater inflows.
A hierarchical model for spatial capture-recapture data
Royle, J. Andrew; Young, K.V.
2008-01-01
Estimating density is a fundamental objective of many animal population studies. Application of methods for estimating population size from ostensibly closed populations is widespread, but ineffective for estimating absolute density because most populations are subject to short-term movements or so-called temporary emigration. This phenomenon invalidates the resulting estimates because the effective sample area is unknown. A number of methods involving the adjustment of estimates based on heuristic considerations are in widespread use. In this paper, a hierarchical model of spatially indexed capture-recapture data is proposed for sampling based on area searches of spatial sample units subject to uniform sampling intensity. The hierarchical model contains explicit models for the distribution of individuals and their movements, in addition to an observation model that is conditional on the location of individuals during sampling. Bayesian analysis of the hierarchical model is achieved by the use of data augmentation, which allows for a straightforward implementation in the freely available software WinBUGS. We present results of a simulation study that was carried out to evaluate the operating characteristics of the Bayesian estimator under variable densities and movement patterns of individuals. An application of the model is presented for survey data on the flat-tailed horned lizard (Phrynosoma mcallii) in Arizona, USA.
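The effective-sample-area problem described above can be illustrated with a small simulation. This is a hypothetical sketch, not the paper's model: the region, abundance, movement scale, and survey counts below are invented, and detection is simplified to "momentary location falls inside the searched plot".

```python
import random

random.seed(1)

# Individuals have activity centers on a 2 x 2 region; only the central
# 1 x 1 plot is searched. Short-term movement around each center means the
# effective sample area exceeds the plot, so the naive count/area estimator
# is biased upward relative to the true density of activity centers.
N, SIGMA, SURVEYS = 200, 0.25, 5   # abundance, movement scale, repeat visits
centers = [(random.uniform(0, 2), random.uniform(0, 2)) for _ in range(N)]

def in_plot(x, y):
    return 0.5 <= x <= 1.5 and 0.5 <= y <= 1.5

detected = set()
for _ in range(SURVEYS):
    for i, (cx, cy) in enumerate(centers):
        # momentary location = activity center + Gaussian movement
        x, y = random.gauss(cx, SIGMA), random.gauss(cy, SIGMA)
        if in_plot(x, y):
            detected.add(i)

naive_density = len(detected) / 1.0   # individuals detected / plot area
true_density = N / 4.0                # activity centers per unit area
print(naive_density, true_density)
```

The hierarchical model in the abstract resolves this bias by modeling the latent centers and movements explicitly instead of dividing a raw count by the plot area.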
Hierarchical models and the analysis of bird survey information
Sauer, J.R.; Link, W.A.
2003-01-01
Management of birds often requires analysis of collections of estimates. We describe a hierarchical modeling approach to the analysis of these data, in which parameters associated with the individual species estimates are treated as random variables, and probability statements are made about the species parameters conditioned on the data. A Markov-Chain Monte Carlo (MCMC) procedure is used to fit the hierarchical model. This approach is computer intensive, and is based upon simulation. MCMC allows for estimation both of parameters and of derived statistics. To illustrate the application of this method, we use the case in which we are interested in attributes of a collection of estimates of population change. Using data for 28 species of grassland-breeding birds from the North American Breeding Bird Survey, we estimate the number of species with increasing populations, provide precision-adjusted rankings of species trends, and describe a measure of population stability as the probability that the trend for a species is within a certain interval. Hierarchical models can be applied to a variety of bird survey applications, and we are investigating their use in estimation of population change from survey data.
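The shrinkage-plus-probability-statement idea can be sketched with a toy empirical-Bayes normal-normal model. This is a simplified stand-in for the MCMC-fitted hierarchical model: the trend estimates and standard errors below are invented, not Breeding Bird Survey values, and the moment-based plug-in replaces full posterior sampling.

```python
from statistics import NormalDist, fmean, pvariance

# Hypothetical trend estimates (percent change/year) and standard errors
# for eight species -- illustrative numbers only.
est = [1.8, -0.4, 0.9, -2.1, 0.3, 2.5, -1.0, 0.6]
se  = [0.9,  0.8, 1.2,  0.7, 1.1, 1.5,  0.6, 0.9]

# Species trends theta_i ~ N(mu, tau^2); observations est_i ~ N(theta_i, se_i^2).
# Method-of-moments plug-in for the hyperparameters:
mu = fmean(est)
tau2 = max(pvariance(est) - fmean(s * s for s in se), 0.0)

post = []
for e, s in zip(est, se):
    w = tau2 / (tau2 + s * s)      # shrinkage weight toward the grand mean
    post.append((w * e + (1 - w) * mu, w * s * s))   # posterior mean, variance

# Probability statements about species parameters conditioned on the data:
p_increasing = [1 - NormalDist(m, v ** 0.5).cdf(0.0) for m, v in post]
n_increasing = sum(p > 0.5 for p in p_increasing)
print(n_increasing, [round(p, 2) for p in p_increasing])
```

The derived statistic here (number of species with increasing populations) parallels the abstract's use of MCMC draws to estimate functions of the species parameters.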
Structural integrity of hierarchical composites
Directory of Open Access Journals (Sweden)
Marco Paggi
2012-01-01
Full Text Available Interface mechanical problems are of paramount importance in engineering and materials science. Traditionally, due to the complexity of modelling their mechanical behaviour, interfaces are often treated as defects and their features are not explored. In this study, a different approach is illustrated, where the interfaces play an active role in the design of innovative hierarchical composites and are fundamental for their structural integrity. Numerical examples regarding cutting tools made of hierarchical cellular polycrystalline materials are proposed, showing that tailoring of interface properties at the different scales is the way to achieve superior mechanical responses that cannot be obtained using standard materials.
Operation of the Bayes Inference Engine
Energy Technology Data Exchange (ETDEWEB)
Hanson, K.M.; Cunningham, G.S.
1998-07-27
The authors have developed a computer application, called the Bayes Inference Engine (BIE), to enable one to make inferences about models of a physical object from radiographs taken of it. In the BIE, calculational models are represented by a data-flow diagram that can be manipulated by the analyst in a graphical-programming environment. The authors demonstrate the operation of the BIE with examples of two-dimensional tomographic reconstruction, including uncertainty estimation.
Multiple comparisons in genetic association studies: a hierarchical modeling approach.
Yi, Nengjun; Xu, Shizhong; Lou, Xiang-Yang; Mallick, Himel
2014-02-01
Multiple comparisons or multiple testing has been viewed as a thorny issue in genetic association studies aiming to detect disease-associated genetic variants from a large number of genotyped variants. We alleviate the problem of multiple comparisons by proposing a hierarchical modeling approach that is fundamentally different from the existing methods. The proposed hierarchical models simultaneously fit as many variables as possible and shrink unimportant effects towards zero. Thus, the hierarchical models yield more efficient estimates of parameters than the traditional methods that analyze genetic variants separately, and also coherently address the multiple comparisons problem due to largely reducing the effective number of genetic effects and the number of statistically "significant" effects. We develop a method for computing the effective number of genetic effects in hierarchical generalized linear models, and propose a new adjustment for multiple comparisons, the hierarchical Bonferroni correction, based on the effective number of genetic effects. Our approach not only increases the power to detect disease-associated variants but also controls the Type I error. We illustrate and evaluate our method with real and simulated data sets from genetic association studies. The method has been implemented in our freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/).
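A deliberately simplified analogue of the idea (not the BhGLM algorithm) can be shown in a few lines: shrinking each effect toward zero reduces the model's effective dimension, and the multiplicity correction can then divide by that reduced count. The prior variance, effect sizes, and the weight-sum definition of the effective number below are our own illustrative choices.

```python
# Each standardized effect beta_j is shrunk by w_j = tau2 / (tau2 + se_j^2);
# we take m_eff = sum of shrinkage weights as the effective number of effects
# and use it in a "hierarchical Bonferroni" threshold alpha / m_eff.
tau2 = 0.5                                    # prior variance (assumed)
beta = [2.9, 0.2, -0.1, 3.4, 0.05, -0.3, 0.15, 2.2]
se   = [1.0] * len(beta)

w = [tau2 / (tau2 + s * s) for s in se]       # per-effect shrinkage weights
shrunk = [wj * b for wj, b in zip(w, beta)]
m_eff = sum(w)                                # effective number of effects

alpha = 0.05
bonferroni_raw = alpha / len(beta)            # classical: 8 tests
bonferroni_hier = alpha / m_eff               # hierarchical: fewer effective tests
print(round(m_eff, 2), bonferroni_raw, round(bonferroni_hier, 4))
```

Because m_eff < 8, the hierarchical threshold is less conservative than the classical Bonferroni cut, which is the mechanism behind the power gain the abstract describes.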
Richards Bay effluent pipeline
CSIR Research Space (South Africa)
Lord, DA
1986-07-01
Full Text Available This report discusses the adequate provision for waste disposal is an essential part of the infrastructure needed in the development of Richards Bay as a deepwater harbour and industrial/metropolitan area. Having considered various options for waste...
A hierarchical model of temporal perception.
Pöppel, E
1997-05-01
Temporal perception comprises subjective phenomena such as simultaneity, successiveness, temporal order, subjective present, temporal continuity and subjective duration. These elementary temporal experiences are hierarchically related to each other. Functional system states with a duration of 30 ms are implemented by neuronal oscillations and they provide a mechanism to define successiveness. These system states are also responsible for the identification of basic events. For a sequential representation of several events time tags are allocated, resulting in an ordinal representation of such events. A mechanism of temporal integration binds successive events into perceptual units of 3 s duration. Such temporal integration, which is automatic and presemantic, is also operative in movement control and other cognitive activities. Because of the omnipresence of this integration mechanism it is used for a pragmatic definition of the subjective present. Temporal continuity is the result of a semantic connection between successive integration intervals. Subjective duration is known to depend on mental load and attentional demand, high load resulting in long time estimates. In the hierarchical model proposed, system states of 30 ms and integration intervals of 3 s, together with a memory store, provide an explanatory neuro-cognitive machinery for differential subjective duration.
Sensory Hierarchical Organization and Reading.
Skapof, Jerome
The purpose of this study was to judge the viability of an operational approach aimed at assessing response styles in reading using the hypothesis of sensory hierarchical organization. A sample of 103 middle-class children from a New York City public school, between the ages of five and seven, took part in a three-phase experiment. Phase one…
Memory Stacking in Hierarchical Networks.
Westö, Johan; May, Patrick J C; Tiitinen, Hannu
2016-02-01
Robust representations of sounds with a complex spectrotemporal structure are thought to emerge in hierarchically organized auditory cortex, but the computational advantage of this hierarchy remains unknown. Here, we used computational models to study how such hierarchical structures affect temporal binding in neural networks. We equipped individual units in different types of feedforward networks with local memory mechanisms storing recent inputs and observed how this affected the ability of the networks to process stimuli context dependently. Our findings illustrate that these local memories stack up in hierarchical structures and hence allow network units to exhibit selectivity to spectral sequences longer than the time spans of the local memories. We also illustrate that short-term synaptic plasticity is a potential local memory mechanism within the auditory cortex, and we show that it can bring robustness to context dependence against variation in the temporal rate of stimuli, while introducing nonlinearities to response profiles that are not well captured by standard linear spectrotemporal receptive field models. The results therefore indicate that short-term synaptic plasticity might provide hierarchically structured auditory cortex with computational capabilities important for robust representations of spectrotemporal patterns.
Hierarchical analysis of the quiet Sun magnetism
Ramos, A Asensio
2014-01-01
Standard statistical analysis of the magnetic properties of the quiet Sun relies on simple histograms of quantities inferred from maximum-likelihood estimations. Because of the inherent degeneracies, either intrinsic or induced by the noise, this approach is not optimal and can lead to highly biased results. We carry out a meta-analysis of the magnetism of the quiet Sun from Hinode observations using a hierarchical probabilistic method. This model allows us to infer the statistical properties of the magnetic field vector over the observed field-of-view, consistently taking into account the uncertainties in each pixel due to noise and degeneracies. Our results indicate that the magnetic fields are very weak, below 275 G with 95% credibility, with a slight preference for horizontal fields, although the distribution is not far from a quasi-isotropic distribution.
Hierarchical Prisoner's Dilemma in Hierarchical Public-Goods Game
Fujimoto, Yuma; Kaneko, Kunihiko
2016-01-01
The dilemma in cooperation is one of the major concerns in game theory. In a public-goods game, each individual pays a cost for cooperation, or to prevent defection, and receives a reward from the collected cost in a group. Thus, defection is beneficial for each individual, while cooperation is beneficial for the group. Now, groups (say, countries) consisting of individual players also play games. To study such a multi-level game, we introduce a hierarchical public-goods (HPG) game in which two groups compete for finite resources by utilizing costs collected from individuals in each group. Analyzing this HPG game, we found a hierarchical prisoner's dilemma, in which groups choose the defection policy (say, armaments) as a Nash strategy to optimize each group's benefit, while cooperation optimizes the total benefit. On the other hand, for each individual within a group, refusing to pay the cost (say, tax) is a Nash strategy, which turns out to be a cooperation policy for the group, thus leading to a hierarchical d...
Simultaneous Estimation of Noise Variance and Number of Peaks in Bayesian Spectral Deconvolution
Tokuda, Satoru; Nagata, Kenji; Okada, Masato
2017-02-01
The heuristic identification of peaks from noisy complex spectra often leads to misunderstanding of the physical and chemical properties of matter. In this paper, we propose a framework based on Bayesian inference, which enables us to separate multipeak spectra into single peaks statistically and consists of two steps. The first step is estimating both the noise variance and the number of peaks as hyperparameters based on Bayes free energy, which generally is not analytically tractable. The second step is fitting the parameters of each peak function to the given spectrum by calculating the posterior density, which has a problem of local minima and saddles since multipeak models are nonlinear and hierarchical. Our framework enables the escape from local minima or saddles by using the exchange Monte Carlo method and calculates Bayes free energy via the multiple histogram method. We discuss a simulation demonstrating how efficient our framework is and show that estimating both the noise variance and the number of peaks prevents overfitting, overpenalizing, and misunderstanding the precision of parameter estimation.
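The role of the exchange (replica-exchange) Monte Carlo step, escaping local minima of a multimodal objective, can be shown on a toy one-dimensional "energy" rather than the paper's spectral model. Everything below (the double-well function, temperatures, step sizes) is an invented stand-in chosen only to exhibit the mechanism.

```python
import math
import random

random.seed(3)

def energy(x):
    # double well: shallow local minimum near x = -1, global minimum near x = 2
    return 0.5 * (x + 1) ** 2 * (x - 2) ** 2 - 0.8 * x

temps = [0.1, 0.5, 2.0]          # cold chain first, hot chain last
x = [-1.0, -1.0, -1.0]           # start every replica in the shallow well
best_x = -1.0

for _ in range(4000):
    # Metropolis update within each replica
    for i, t in enumerate(temps):
        prop = x[i] + random.gauss(0, 0.5)
        if random.random() < math.exp(min(0.0, (energy(x[i]) - energy(prop)) / t)):
            x[i] = prop
    # exchange attempt between a random adjacent temperature pair
    i = random.randrange(len(temps) - 1)
    d = (1 / temps[i] - 1 / temps[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
    if random.random() < math.exp(min(0.0, d)):
        x[i], x[i + 1] = x[i + 1], x[i]
    if energy(x[0]) < energy(best_x):
        best_x = x[0]

print(round(best_x, 2))
```

A single cold chain would stay trapped near x = -1; the hot replica crosses the barrier and the swap moves pass the good state down to the cold chain, mirroring how the paper's framework escapes local minima and saddles of the multipeak posterior.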
Energy Technology Data Exchange (ETDEWEB)
Rodrigo, J. F.; Martinez-Ramos, C.; Barbero, L.; Casas-Ruiz, M.
2011-07-01
Knowledge of radioactivity levels in soils has a double interest: on the one hand, it allows one to establish reference (baseline) values for a region or geographic area, and on the other, to evaluate the external radiation dose received by the population and biota through an appropriate dosimetric model. The natural radioactivity considered is chiefly that of the radionuclides in the natural decay series. The aim of this study is to determine the levels of gamma-emitting radionuclides in marine sediments of the Bay of Cadiz, and the dose rates from external radiation received in the areas studied. (Author)
Directory of Open Access Journals (Sweden)
Giselle Parno Guimarães
2006-02-01
Full Text Available At the beginning of April 2004, concentrations of NHx (NH3 + NH4+) were measured in surface waters of the Guanabara Bay. Concentrations varied from 2 to 143 µmol L^-1. Ammonia exchange at the air-sea interface was quantified using a numerical model. No measurement of NH3 concentration in air (c_air) was performed; calculations of NH3 flux were therefore based on the assumptions of c_air = 1 and 5 µg m^-3. Fluxes were predominantly from the water to the atmosphere and varied from -20 to almost 3500 µg N m^-2 h^-1.
Decision Bayes Criteria for Optimal Classifier Based on Probabilistic Measures
Institute of Scientific and Technical Information of China (English)
Wissal Drira; Faouzi Ghorbel
2014-01-01
This paper addresses the high-dimension sample problem in discriminant analysis under nonparametric and supervised assumptions. Since there is a kind of equivalence between the probabilistic dependence measure and the Bayes classification error probability, we propose to use an iterative algorithm to optimize the dimension reduction for classification with a probabilistic approach to achieve the Bayes classifier. The estimated probabilities of the different errors encountered along the different phases of the system are obtained by the kernel estimate, which is adjusted by means of the smoothing parameter. Experimental results suggest that the proposed approach performs well.
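The core plug-in idea, estimating class-conditional densities with a kernel estimate whose smoothing parameter is tuned from the data, then classifying by the larger density, can be sketched in one dimension. This is not the paper's iterative dimension-reduction algorithm: the Gaussian classes, candidate bandwidths, and leave-one-out tuning rule below are our own illustrative choices.

```python
import math
import random

random.seed(5)

def kde(points, x, h):
    # Gaussian kernel density estimate at x with smoothing parameter h
    return sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in points) / (
        len(points) * h * math.sqrt(2 * math.pi))

class0 = [random.gauss(-1.0, 1.0) for _ in range(100)]
class1 = [random.gauss(+1.0, 1.0) for _ in range(100)]

def classify(x, h):
    # equal priors, so compare class-conditional densities directly
    return 0 if kde(class0, x, h) >= kde(class1, x, h) else 1

def loo_error(h):
    # crude bandwidth tuning via leave-one-out training error
    err = 0
    for i, x in enumerate(class0):
        rest = class0[:i] + class0[i + 1:]
        err += kde(rest, x, h) < kde(class1, x, h)
    for i, x in enumerate(class1):
        rest = class1[:i] + class1[i + 1:]
        err += kde(class0, x, h) >= kde(rest, x, h)
    return err / (len(class0) + len(class1))

best_h = min([0.1, 0.3, 0.5, 1.0], key=loo_error)
tests = [(random.gauss(-1, 1), 0) for _ in range(50)] + \
        [(random.gauss(+1, 1), 1) for _ in range(50)]
acc = sum(classify(x, best_h) == y for x, y in tests) / len(tests)
print(best_h, acc)
```

With unit-variance classes at ±1 the best achievable accuracy is about 0.84, so the plug-in rule landing in that neighborhood illustrates the "equivalence" the abstract leans on between good density estimates and a near-Bayes classifier.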
Hierarchical structure of biological systems
Alcocer-Cuarón, Carlos; Rivera, Ana L; Castaño, Victor M
2014-01-01
A general theory of biological systems, based on a few fundamental propositions, allows a generalization of both the Wiener and the Bertalanffy approaches to theoretical biology. Here, a biological system is defined as a set of self-organized, differentiated elements that interact pair-wise through various networks and media, isolated from other sets by boundaries. Their relation to other systems can be described as a closed loop in a steady state, which leads to a hierarchical structure and functioning of the biological system. Our hierarchical thermodynamic approach can be applied to biological systems of varying sizes through some general principles, based on the exchange of energy, information and/or mass from and within the systems. PMID:24145961
Automatic Hierarchical Color Image Classification
Directory of Open Access Journals (Sweden)
Jing Huang
2003-02-01
Full Text Available Organizing images into semantic categories can be extremely useful for content-based image retrieval and image annotation. Grouping images into semantic classes is a difficult problem, however. Image classification attempts to solve this hard problem by using low-level image features. In this paper, we propose a method for hierarchical classification of images via supervised learning. This scheme relies on using a good low-level feature and subsequently performing feature-space reconfiguration using singular value decomposition to reduce noise and dimensionality. We use the training data to obtain a hierarchical classification tree that can be used to categorize new images. Our experimental results suggest that this scheme not only performs better than standard nearest-neighbor techniques, but also has both storage and computational advantages.
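The two-level routing that a hierarchical classification tree performs can be sketched with nearest centroids. This toy omits the paper's SVD feature-space reconfiguration (which needs a linear-algebra library) and uses invented three-dimensional "features" and category names; only the coarse-then-fine decision structure is the point.

```python
import math

# (group, class) -> training feature vectors; all values invented
train = {
    ("outdoor", "beach"):  [[0.9, 0.8, 0.1], [0.8, 0.9, 0.2]],
    ("outdoor", "forest"): [[0.2, 0.9, 0.1], [0.1, 0.8, 0.2]],
    ("indoor", "office"):  [[0.3, 0.2, 0.9], [0.2, 0.3, 0.8]],
    ("indoor", "kitchen"): [[0.6, 0.1, 0.7], [0.7, 0.2, 0.8]],
}

def centroid(vecs):
    return [sum(c) / len(vecs) for c in zip(*vecs)]

coarse, fine = {}, {}
for (group, label), vecs in train.items():
    coarse.setdefault(group, []).extend(vecs)
    fine[(group, label)] = centroid(vecs)
coarse = {g: centroid(v) for g, v in coarse.items()}

def classify(x):
    # level 1: route to the nearest coarse group centroid
    g = min(coarse, key=lambda g: math.dist(coarse[g], x))
    # level 2: nearest class centroid within that group only
    return min((k for k in fine if k[0] == g),
               key=lambda k: math.dist(fine[k], x))

print(classify([0.85, 0.85, 0.15]))  # -> ('outdoor', 'beach')
```

The storage and computational advantage the abstract mentions comes from this routing: level 2 compares the query only against classes inside the chosen group rather than against every class.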
Intuitionistic fuzzy hierarchical clustering algorithms
Institute of Scientific and Technical Information of China (English)
Xu Zeshui
2009-01-01
Intuitionistic fuzzy set (IFS) is a set of 2-tuple arguments, each of which is characterized by a membership degree and a nonmembership degree. The generalized form of IFS is the interval-valued intuitionistic fuzzy set (IVIFS), whose components are intervals rather than exact numbers. IFSs and IVIFSs have been found to be very useful to describe vagueness and uncertainty. However, it seems that little attention has been focused on the clustering analysis of IFSs and IVIFSs. An intuitionistic fuzzy hierarchical algorithm is introduced for clustering IFSs, which is based on the traditional hierarchical clustering procedure, the intuitionistic fuzzy aggregation operator, and the basic distance measures between IFSs: the Hamming distance, the normalized Hamming distance, the weighted Hamming distance, the Euclidean distance, the normalized Euclidean distance, and the weighted Euclidean distance. Subsequently, the algorithm is extended for clustering IVIFSs. Finally, the algorithm and its extended form are applied to the classification of building materials and enterprises, respectively.
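A minimal sketch of agglomerative clustering of IFSs, assuming the normalized Hamming distance (including the hesitancy degree pi = 1 - mu - nu) and element-wise averaging as the aggregation operator. The four "materials" and their (membership, nonmembership) pairs are invented for illustration.

```python
def hamming(a, b):
    # normalized Hamming distance between two IFSs over the same universe
    total = 0.0
    for (m1, v1), (m2, v2) in zip(a, b):
        p1, p2 = 1 - m1 - v1, 1 - m2 - v2          # hesitancy degrees
        total += abs(m1 - m2) + abs(v1 - v2) + abs(p1 - p2)
    return total / (2 * len(a))

def average(ifss):
    # simple averaging aggregation as the cluster representative
    k = len(ifss)
    return [(sum(s[i][0] for s in ifss) / k, sum(s[i][1] for s in ifss) / k)
            for i in range(len(ifss[0]))]

def cluster(ifss, k):
    clusters = [[s] for s in ifss]
    while len(clusters) > k:
        # merge the pair of clusters whose representatives are closest
        pairs = [(hamming(average(a), average(b)), i, j)
                 for i, a in enumerate(clusters)
                 for j, b in enumerate(clusters) if i < j]
        _, i, j = min(pairs)
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

materials = [
    [(0.90, 0.05), (0.80, 0.10)],   # A
    [(0.85, 0.10), (0.75, 0.20)],   # B, similar to A
    [(0.20, 0.70), (0.30, 0.60)],   # C
    [(0.25, 0.60), (0.20, 0.70)],   # D, similar to C
]
result = cluster(materials, 2)
print([len(c) for c in result])
```

The weighted and Euclidean variants named in the abstract would drop in as alternative distance functions without changing the merge loop.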
Hierarchical Formation of Galactic Clusters
Elmegreen, B G
2006-01-01
Young stellar groupings and clusters have hierarchical patterns ranging from flocculent spiral arms and star complexes on the largest scale to OB associations, OB subgroups, small loose groups, clusters and cluster subclumps on the smallest scales. There is no obvious transition in morphology at the cluster boundary, suggesting that clusters are only the inner parts of the hierarchy where stars have had enough time to mix. The power-law cluster mass function follows from this hierarchical structure: n(M_cl) ∝ M_cl^(-b) with b ≈ 2. This value of b is independently required by the observation that the summed IMFs from many clusters in a galaxy approximately equal the IMF of each cluster.
Hierarchical matrices algorithms and analysis
Hackbusch, Wolfgang
2015-01-01
This self-contained monograph presents matrix algorithms and their analysis. The new technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential. Other applications include the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store matrices and to perform matrix operations approximately with almost linear cost and a controllable degree of approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include the matrix inversion and LU decomposition. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists ...
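The mechanism behind the "almost linear cost with controllable error" claim can be demonstrated on a single admissible block. Assuming a smooth kernel 1/(y - x) on well-separated point clusters (our own toy choice), a few steps of greedy cross approximation reproduce the n x n block from O(rank * n) stored numbers:

```python
n = 32
xs = [i / n for i in range(n)]          # source points in [0, 1)
ys = [2.0 + i / n for i in range(n)]    # well-separated targets in [2, 3)

def kernel(x, y):
    return 1.0 / (y - x)                # smooth on separated clusters

block = [[kernel(x, y) for y in ys] for x in xs]

def aca(mat, rank):
    """Greedy cross approximation: peel off `rank` rank-1 terms; return residual."""
    m = len(mat)
    resid = [row[:] for row in mat]
    for _ in range(rank):
        # full pivoting: take the largest remaining entry (fine for a demo)
        pi, pj = max(((i, j) for i in range(m) for j in range(m)),
                     key=lambda t: abs(resid[t[0]][t[1]]))
        piv = resid[pi][pj]
        col = [resid[i][pj] / piv for i in range(m)]
        row = resid[pi][:]
        for i in range(m):
            for j in range(m):
                resid[i][j] -= col[i] * row[j]
    return resid

def max_abs(mat):
    return max(abs(v) for row in mat for v in row)

rel1 = max_abs(aca(block, 1)) / max_abs(block)
rel4 = max_abs(aca(block, 4)) / max_abs(block)
print(rel1, rel4)   # error shrinks rapidly as the stored rank grows
```

A hierarchical matrix applies this blockwise: near-diagonal blocks are kept dense, admissible blocks are stored in this low-rank form, giving the logarithmic cost growth with accuracy that the monograph analyzes.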
Hierarchical Cont-Bouchaud model
Paluch, Robert; Holyst, Janusz A
2015-01-01
We extend the well-known Cont-Bouchaud model to include a hierarchical topology of agents' interactions. The influence of hierarchy on system dynamics is investigated by two models. The first one is based on a multi-level, nested Erdos-Renyi random graph and individual decisions by agents according to Potts dynamics. This approach does not lead to a broad return distribution outside a parameter regime close to the original Cont-Bouchaud model. In the second model we introduce a limited hierarchical Erdos-Renyi graph, where merging of clusters at a level h+1 involves only clusters that have merged at the previous level h, and we use the original Cont-Bouchaud agent dynamics on the resulting clusters. The second model leads to a heavy-tail distribution of cluster sizes and relative price changes in a wide range of connection densities, not only close to the percolation threshold.
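For reference, the baseline (non-hierarchical) Cont-Bouchaud mechanism that the paper extends can be sketched directly: agents sit on an Erdos-Renyi graph near the percolation threshold, each connected cluster buys, sells, or stays inactive, and returns are size-weighted sums. The graph size, activation probability, and step count below are illustrative choices of ours.

```python
import random
from collections import Counter

random.seed(11)

N = 400
p_link = 1.0 / N            # near the percolation threshold c = p*N = 1
p_act = 0.05                # per-cluster activation probability per step

# union-find to extract connected clusters of the random graph
parent = list(range(N))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p_link:
            parent[find(i)] = find(j)

sizes = list(Counter(find(i) for i in range(N)).values())

returns = []
for _ in range(2000):
    r = 0
    for s in sizes:
        u = random.random()
        if u < p_act:           r += s   # cluster buys
        elif u < 2 * p_act:     r -= s   # cluster sells
    returns.append(r / N)

# heavy tails show up as excess kurtosis above the Gaussian value of 3
m = sum(returns) / len(returns)
var = sum((x - m) ** 2 for x in returns) / len(returns)
kurt = sum((x - m) ** 4 for x in returns) / len(returns) / var ** 2
print(round(kurt, 1))
```

The paper's limited hierarchical graph replaces this flat ER cluster structure with level-by-level merging, which is what broadens the return distribution away from the percolation threshold.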
Hierarchical Clustering and Active Galaxies
Hatziminaoglou, E; Manrique, A
2000-01-01
The growth of supermassive black holes and the parallel development of activity in galactic nuclei are implemented in an analytic code of hierarchical clustering. The evolution of the luminosity function of quasars and AGN will be computed, with special attention paid to the connection between quasars and Seyfert galaxies. One of the major interests of the model is the parallel study of quasar formation and evolution and of the history of star formation.
Hybrid and hierarchical composite materials
Kim, Chang-Soo; Sano, Tomoko
2015-01-01
This book addresses a broad spectrum of areas in both hybrid materials and hierarchical composites, including recent development of processing technologies, structural designs, modern computer simulation techniques, and the relationships between the processing-structure-property-performance. Each topic is introduced at length with numerous and detailed examples and over 150 illustrations. In addition, the authors present a method of categorizing these materials, so that representative examples of all material classes are discussed.
Treatment Protocols as Hierarchical Structures
Ben-Bassat, Moshe; Carlson, Richard W.; Puri, Vinod K.; Weil, Max Harry
1978-01-01
We view a treatment protocol as a hierarchical structure of therapeutic modules. The lowest level of this structure consists of individual therapeutic actions. Combinations of individual actions define higher level modules, which we call routines. Routines are designed to manage limited clinical problems, such as the routine for fluid loading to correct hypovolemia. Combinations of routines and additional actions, together with comments, questions, or precautions organized in a branching logic, in turn, define the treatment protocol for a given disorder. Adoption of this modular approach may facilitate the formulation of treatment protocols, since the physician is not required to prepare complex flowcharts. This hierarchical approach also allows protocols to be updated and modified in a flexible manner. By use of such a standard format, individual components may be fitted together to create protocols for multiple disorders. The technique is suited for computer implementation. We believe that this hierarchical approach may facilitate standardization of patient care as well as aid in clinical teaching. A protocol for acute pancreatitis is used to illustrate this technique.
Disturbance observer based hierarchical control of coaxial-rotor UAV.
Mokhtari, M Rida; Cherki, Brahim; Braham, Amal Choukchou
2017-03-01
This paper proposes a hierarchical controller based on a new disturbance observer with finite-time convergence (FTDO) to solve path tracking for small coaxial-rotor unmanned aerial vehicles (UAVs) despite unknown aerodynamic effects. The hierarchical control technique is used to separate the flight control problem into an inner loop that controls attitude and an outer loop that controls the thrust force acting on the vehicle. The new disturbance observer with finite-time convergence is integrated to estimate the unknown uncertainties and disturbances online and to actively compensate for them in finite time. The analysis further extends to the design of a control law that takes the disturbance estimation procedure into account. Numerical simulations are carried out to demonstrate the efficiency of the proposed control strategy.
Hierarchical model-based interferometric synthetic aperture radar image registration
Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing
2014-01-01
With the rapid development of spaceborne interferometric synthetic aperture radar technology, classical image registration methods cannot deliver the efficiency and accuracy needed to process large volumes of real data. Based on this fact, we propose a new method. This method consists of two steps: coarse registration, realized by a cross-correlation algorithm, and fine registration, realized by a hierarchical model-based algorithm. The hierarchical model-based algorithm is a high-efficiency optimization algorithm. The key features of this algorithm are a global model that constrains the overall structure of the motion estimated, a local model that is used in the estimation process, and a coarse-to-fine refinement strategy. Experimental results from different kinds of simulated and real data have confirmed that the proposed method is very fast and has high accuracy. Compared with a conventional cross-correlation method, the proposed method provides markedly improved performance.
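The coarse-registration step can be illustrated with a one-dimensional stand-in for the image case: estimate the integer offset between a master signal and its noisy shifted copy by maximizing normalized cross-correlation over candidate shifts (signal length, noise level, and the true shift below are invented; a fine stage would then refine to sub-pixel accuracy).

```python
import random

random.seed(2)

n, true_shift = 256, 17
master = [random.gauss(0, 1) for _ in range(n + 32)]
# "slave" signal: the master shifted by true_shift samples, plus noise
slave = [m + random.gauss(0, 0.1) for m in master[true_shift:true_shift + n]]
master = master[:n]

def corr(a, b):
    # normalized cross-correlation of two equal-length sequences
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

# brute-force search over candidate integer shifts
best = max(range(32), key=lambda s: corr(master[s:s + n - 32], slave[:n - 32]))
print(best)  # recovers the 17-sample offset
```

In the 2-D InSAR setting the same search runs over row/column offsets (usually via FFT-based correlation for speed), after which the hierarchical model-based stage estimates the remaining warp.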
Johno, Hisashi; Nakamoto, Kazunori; Saigo, Tatsuhiko
2015-01-01
Kernel Bayes' rule has been proposed as a nonparametric kernel-based method to realize Bayesian inference in reproducing kernel Hilbert spaces. However, we demonstrate both theoretically and experimentally that the prediction result by kernel Bayes' rule is in some cases unnatural. We consider that this phenomenon is in part due to the fact that the assumptions in kernel Bayes' rule do not hold in general.
California Department of Resources — The Bay Trail provides easily accessible recreational opportunities for outdoor enthusiasts, including hikers, joggers, bicyclists and skaters. It also offers a...
Directory of Open Access Journals (Sweden)
Qi Yan
2011-01-01
Full Text Available Background: MTML-msBayes uses hierarchical approximate Bayesian computation (HABC) under a coalescent model to infer temporal patterns of divergence and gene flow across codistributed taxon-pairs. Under a model of multiple codistributed taxa that diverge into taxon-pairs with subsequent gene flow or isolation, one can estimate hyper-parameters that quantify the mean and variability in divergence times or test models of migration and isolation. The software uses multi-locus DNA sequence data collected from multiple taxon-pairs and allows variation across taxa in demographic parameters as well as heterogeneity in DNA mutation rates across loci. The method also allows a flexible sampling scheme: different numbers of loci of varying length can be sampled from different taxon-pairs. Results: Simulation tests reveal increasing power with increasing numbers of loci when attempting to distinguish temporal congruence from incongruence in divergence times across taxon-pairs. These results are robust to DNA mutation rate heterogeneity. Estimating mean divergence times and testing simultaneous divergence was less accurate with migration, but improved if one specified the correct migration model. Simulation validation tests demonstrated that one can detect the correct migration or isolation model with high probability, and that this HABC model testing procedure was greatly improved by incorporating a summary statistic originally developed for this task (Wakeley's ΨW). The method is applied to an empirical data set of three Australian avian taxon-pairs and a result of simultaneous divergence with some subsequent gene flow is inferred. Conclusions: To retain flexibility and compatibility with existing bioinformatics tools, MTML-msBayes is a pipeline software package consisting of Perl, C and R programs that are executed via the command line. Source code and binaries are available for download at http://msbayes.sourceforge.net/ under an open source license.
Smoak, Joseph M.; Sanders, Christian J.; Patchineelam, Sambasiva R.; Moore, Willard S.
2012-11-01
226Ra and 228Ra activities were determined in water samples from within and adjacent to Sepetiba Bay, Rio de Janeiro State (Brazil) in 1998, 2005 and 2007. Surface waters in Sepetiba Bay were substantially higher in 226Ra and 228Ra compared to ocean end member samples. Using the residence time of water in the bay we calculated the flux required to maintain the observed enrichment over the ocean end members. We then applied a radium mass balance to estimate the volume of submarine groundwater discharge (SGD) into the bay. The estimates of SGD into Sepetiba Bay (in 10^10 L day^-1) were 2.56, 3.75, and 1.0 for 1998, 2005, and 2007, respectively. These estimates are equivalent to approximately 1% of the total volume of the bay each day, or 50 L m^-2 day^-1. It is likely that a substantial portion of the SGD in Sepetiba Bay consists of infiltrated seawater. This large flux of SGD has the potential to supply substantial quantities of nutrients, carbon and metals into coastal waters. The SGD found here is greater than what is typically found in SGD studies along the eastern United States and areas with similar geologic characteristics. Considering there are many coastal areas around the world like Sepetiba Bay, this could revise upward the already important contribution of SGD to coastal as well as oceanic budgets.
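The arithmetic of a radium mass balance for SGD is simple enough to sketch. All numbers below are invented placeholders, not the Sepetiba Bay values: excess Ra inventory divided by residence time gives the Ra flux needed to sustain the enrichment, and dividing by the groundwater end-member activity converts that flux into a water volume.

```python
# Hypothetical inputs (units noted inline; activities in dpm per 100 L)
bay_volume = 2.6e12          # L
ra_bay = 12.0                # bay surface water
ra_ocean = 7.0               # ocean end member
ra_gw = 250.0                # groundwater end member
residence_days = 4.0

excess_inventory = (ra_bay - ra_ocean) / 100.0 * bay_volume   # dpm of excess Ra
ra_flux = excess_inventory / residence_days                   # dpm per day
sgd = ra_flux / (ra_gw / 100.0)                               # L per day
print(f"{sgd:.2e} L/day, {sgd / bay_volume:.1%} of bay volume per day")
```

With these placeholder values the balance yields 1.3e10 L/day, i.e., 0.5% of the bay volume daily; the real calculation additionally accounts for 226Ra vs 228Ra decay and mixing losses.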
Thatcher Bay, Washington, Nearshore Restoration Assessment
Breems, Joel; Wyllie-Echeverria, Sandy; Grossman, Eric E.; Elliott, Joel
2009-01-01
The San Juan Archipelago, located at the confluence of the Puget Sound, the Straits of Juan de Fuca in Washington State, and the Straits of Georgia, British Columbia, Canada, provides essential nearshore habitat for diverse salmonid, forage fish, and bird populations. With 408 miles of coastline, the San Juan Islands provide a significant portion of the available nearshore habitat for the greater Puget Sound and are an essential part of the regional efforts to restore Puget Sound (Puget Sound Shared Strategy 2005). The nearshore areas of the San Juan Islands provide a critical link between the terrestrial and marine environments. For this reason the focus on restoration and conservation of nearshore habitat in the San Juan Islands is of paramount importance. Wood-waste was a common by-product of historical lumber-milling operations. To date, relatively little attention has been given to the impact of historical lumber-milling operations in the San Juan Archipelago. Thatcher Bay, on Blakely Island, located near the east edge of the archipelago, is presented here as a case study on the restoration potential for a wood-waste contaminated nearshore area. Case study components include (1) a brief discussion of the history of milling operations, (2) an estimate of the location and amount of the current distribution of wood-waste at the site, (3) a preliminary examination of the impacts of wood-waste on benthic flora and fauna at the site, and (4) the presentation of several restoration alternatives for the site. The history of milling activity in Thatcher Bay began in 1879 with the construction of a mill in the southeastern part of the bay. Milling activity continued for more than 60 years, until the mill closed in 1942. Currently, the primary evidence of the historical milling operations is the presence of approximately 5,000 yd³ of wood-waste contaminated sediments. The distribution and thickness of residual wood-waste at the site were determined by using sediment
Hierarchical Control for Smart Grids
DEFF Research Database (Denmark)
Trangbæk, K; Bendtsen, Jan Dimon; Stoustrup, Jakob
2011-01-01
This paper deals with hierarchical model predictive control (MPC) of smart grid systems. The design consists of a high-level MPC controller, a second level of so-called aggregators, which reduces the computational and communication-related load on the high-level control, and a lower level of autonomous consumers. The control system is tasked with balancing electric power production and consumption within the smart grid, and makes active use of the flexibility of a large number of power-producing and/or power-consuming units. The objective is to accommodate the load variation on the grid, arising…
HIERARCHICAL PROBABILISTIC INFERENCE OF COSMIC SHEAR
Energy Technology Data Exchange (ETDEWEB)
Schneider, Michael D.; Dawson, William A. [Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States); Hogg, David W. [Center for Cosmology and Particle Physics, New York University, New York, NY 10003 (United States); Marshall, Philip J.; Bard, Deborah J. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Meyers, Joshua [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, 452 Lomita Mall, Stanford, CA 94035 (United States); Lang, Dustin, E-mail: schneider42@llnl.gov [Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213 (United States)
2015-07-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
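Self-normalized importance sampling, the device the authors use to decouple patch-level modeling from the global shear inference, can be illustrated in one dimension. The densities below are toy stand-ins (a narrow Gaussian "shear posterior" and a broader Gaussian "interim prior"), not the paper's likelihoods:

```python
import math
import random

random.seed(0)

def target_pdf(g):
    """Unnormalized toy 'shear posterior' for a small image patch."""
    return math.exp(-0.5 * ((g - 0.05) / 0.01) ** 2)

def proposal_sample():
    """Draw from the broad interim prior used when modeling each patch."""
    return random.gauss(0.0, 0.05)

def proposal_pdf(g):
    return math.exp(-0.5 * (g / 0.05) ** 2) / (0.05 * math.sqrt(2 * math.pi))

# Reweight interim-prior draws so they follow the target distribution.
draws = [proposal_sample() for _ in range(20000)]
weights = [target_pdf(g) / proposal_pdf(g) for g in draws]
shear_mean = sum(w * g for w, g in zip(weights, draws)) / sum(weights)
```

Because the weights are self-normalized, the target only needs to be known up to a constant, which is what lets the patch-level posteriors be computed independently of the global normalization.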
Hierarchical Structures in Hypertext Learning Environments
Bezdan, Eniko; Kester, Liesbeth; Kirschner, Paul A.
2011-01-01
Bezdan, E., Kester, L., & Kirschner, P. A. (2011, 9 September). Hierarchical Structures in Hypertext Learning Environments. Presentation for the visit of KU Leuven, Open University, Heerlen, The Netherlands.
Corpus Christi, Nueces, and Aransas Bays
Handley, Lawrence R.; Spear, Kathryn A.; Eleonor Taylor,; Thatcher, Cindy
2015-01-01
and Choke Canyon reservoir. The Corpus Christi Estuary receives approximately 35 percent of the total freshwater inflow of 1,480,178,205 cubic meters (m³) (1.2 million acre-feet) in the region; the Aransas Estuary receives about 53 percent. Tidal range is only 0.46 m (1.5 ft) on the Gulf shoreline and 0.15 m (0.5 ft) in Nueces Bay. Strong winds are the primary force behind water circulation in the Coastal Bend estuaries. The estuaries in Coastal Bend provide habitat and nutrition for many species of plants and animals, water purification, protection from storms, recreation and seafood, education, and maritime commerce (Holt, 1998). Coastal marshes comprise 45,729 hectares (113,000 acres), or 11 percent, of the aquatic habitats in Coastal Bend. There are 835 species of plants and 2,340 species of animals, including nearly 500 species of birds, in the Coastal Bend Bays area, not including those species that remain unidentified. Nineteen of these species are threatened or endangered. The only remaining natural population of the endangered whooping crane winters at Aransas National Wildlife Refuge. As the population of the Coastal Bend region grows, the amount of use and stress on the estuaries increases. Approximately 3 percent of Texas’ population—560,000 people in 2000—live in Coastal Bend (Holt, 1998; Handley and others, 2007). Corpus Christi, whose population was estimated at over 316,000 in 2013 (U.S. Census Bureau, 2010), is the only city in Coastal Bend with a population greater than 20,000. Agriculture, oil and gas production, manufacturing and shipping, national defense, and tourism dominate the economy in the Coastal Bend Bay area. Petroleum and chemical industries generate the most revenue, whereas tourism and military activities provide the most jobs. Oil production generates over $300 million, and gas production generates nearly $700 million each year. Approximately 25 percent of the jobs held in the Coastal Bend are related to tourism. Livestock
Bayes Multiple Decision Functions
Wu, Wensong
2011-01-01
This paper deals with the problem of simultaneously making many (M) binary decisions based on one realization of a random data matrix X. M is typically large and X will usually have M rows associated with each of the M decisions to make, but for each row the data may be low dimensional. A Bayesian decision-theoretic approach for this problem is implemented with the overall loss function being a cost-weighted linear combination of Type I and Type II loss functions. The class of loss functions considered allows for the use of the false discovery rate (FDR), false nondiscovery rate (FNR), and missed discovery rate (MDR) in assessing the decision. Through this Bayesian paradigm, the Bayes multiple decision function (BMDF) is derived and an efficient algorithm to obtain the optimal Bayes action is described. In contrast to many works in the literature where the rows of the matrix X are assumed to be stochastically independent, we allow in this paper a dependent data structure with the associations obtained through...
2010-05-28
South Bay Cross Bay Swim, Great South Bay, NY. AGENCY: Coast Guard, DHS. ACTION: Final rule. SUMMARY: … Swim, Great South Bay, NY, in the Federal Register (74 FR 32428). We did not receive any comments or… The rule published at 74 FR 32428 on July 8, 2009, is adopted as a final rule with the following changes: PART 100…
Dynamic Organization of Hierarchical Memories.
Kurikawa, Tomoki; Kaneko, Kunihiko
2016-01-01
In the brain, external objects are categorized in a hierarchical way. Although it is widely accepted that objects are represented as static attractors in neural state space, this view does not take into account the interaction between intrinsic neural dynamics and external input, which is essential to understanding how the neural system responds to inputs. Indeed, structured spontaneous neural activity is known to exist even without external inputs, and its relationship with evoked activity is debated. How categorical representations are embedded in spontaneous and evoked activities therefore remains to be uncovered. To address this question, we studied the bifurcation process with increasing input after hierarchically clustered associative memories are learned. We found a "dynamic categorization": without input, neural activity wanders globally over a state space that includes all memories. With increasing input strength, the diffuse representation of a higher category transitions to focused representations specific to each object. The hierarchy of memories is embedded in the transition probability from one memory to another during the spontaneous dynamics. With increased input strength, neural activity wanders over a narrower state space that includes a smaller set of memories, showing a more specific category or memory corresponding to the applied input. Moreover, such coarse-to-fine transitions are also observed temporally during the transient process under constant input, which agrees with experimental findings in the temporal cortex. These results suggest that the hierarchy emerging through interaction with an external input underlies the hierarchy seen during the transient process, as well as in the spontaneous activity.
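The baseline picture the authors start from, memories stored as attractors that a noisy cue relaxes toward, can be illustrated with a classic Hopfield-style associative memory. This is a generic textbook sketch, not the authors' hierarchically clustered model, and the patterns are arbitrary:

```python
import random

random.seed(1)

def train(patterns):
    """Hebbian weights storing each ±1 pattern as an attractor."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=100):
    """Asynchronous updates relax the state toward the nearest stored memory."""
    n = len(state)
    state = state[:]
    for _ in range(steps):
        i = random.randrange(n)
        h = sum(w[i][j] * state[j] for j in range(n))
        state[i] = 1 if h >= 0 else -1
    return state

p1 = [1, 1, 1, 1, -1, -1, -1, -1]
p2 = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([p1, p2])
cue = p1[:]
cue[0] = -1                       # corrupt one bit of the first memory
restored = recall(w, cue)         # dynamics repair the corrupted bit
```

The paper's point is precisely what this static picture misses: how the set of reachable attractors reorganizes as input strength changes.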
A Hierarchical Linear Model with Factor Analysis Structure at Level 2
Miyazaki, Yasuo; Frank, Kenneth A.
2006-01-01
In this article the authors develop a model that employs a factor analysis structure at Level 2 of a two-level hierarchical linear model (HLM). The model (HLM2F) imposes a structure on a deficient rank Level 2 covariance matrix [tau], and facilitates estimation of a relatively large [tau] matrix. Maximum likelihood estimators are derived via the…
Microcontaminants and reproductive impairment of the Forster's tern on Green Bay, Lake Michigan,1983
Kubiak, T.J.; Harris, H.J.; Smith, L.M.; Schwartz, T.R.; Stalling, D.L.; Trick, J.A.; Sileo, L.; Docherty, D.E.; Erdman, T.C.
1989-01-01
For the 1983 nesting season, Forster's tern (Sterna forsteri) reproductive success was significantly impaired on organochlorine contaminated Green Bay, Lake Michigan compared to a relatively uncontaminated inland location at Lake Poygan, Wisconsin. Compared with tern eggs from Lake Poygan, eggs from Green Bay had significantly higher median concentrations of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), other polychlorinated dibenzo-p-dioxins (PCDDs), total polychlorinated biphenyls (PCBs), total (three congeners) non-ortho, ortho' PCBs, five individual PCB congeners known to induce aryl hydrocarbon hydroxylase (AHH) and several other organochlorine contaminants. Conversions of analytical concentrations of TCDD and PCB congeners based on relative AHH induction potencies allowed for estimation of total 2,3,7,8-TCDD equivalents. Two PCB congeners, 2,3,3',4,4'- and 3,3',4,4',5-pentachlorobiphenyl (PeCB) accounted for more than 90% of the median estimated TCDD equivalents at both Green Bay and Lake Poygan. The median estimated TCDD equivalents were almost 11-fold higher in tern eggs from Green Bay than in eggs from Lake Poygan (2175 and 201 pg/g). The hatching success of Green Bay sibling eggs from nests where eggs were collected for contaminant analyses was 75% lower at Green Bay than at Lake Poygan. Hatchability of eggs taken from other nests and artificially incubated was about 50% lower for Green Bay than for Lake Poygan. Among hatchlings from laboratory incubation, those from Green Bay weighed approximately 20% less and had a mean liver weight to body weight ratio 26% greater than those from Lake Poygan. In both field and laboratory, mean minimum incubation periods were significantly longer for eggs from Green Bay compared to Lake Poygan (8.25 and 4.58 days, respectively). Mean minimum incubation time for Green Bay eggs in the field was 4.37 days longer than in the laboratory. Hatchability was greatly improved when Green Bay eggs were incubated by Lake Poygan adults
A review of circulation and mixing studies of San Francisco Bay, California
Smith, Lawrence H.
1987-01-01
influenced by delta discharge, and South Bay, a tributary estuary which responds to conditions in Central Bay. In the northern reach net circulation is characterized by the river-induced seaward flow and a resulting gravitational circulation in the channels, and by a tide- and wind-induced net horizontal circulation. A surface layer of relatively fresh water in Central Bay generated by high delta discharges can induce gravitational circulation in South Bay. During low delta discharges South Bay has nearly the same salinity as Central Bay and is characterized by tide- and wind-induced net horizontal circulation. Several factors control the patterns of circulation and mixing in San Francisco Bay. Viewing circulation and mixing over different time periods and at different geographic scales causes the influences of different factors to be emphasized. The exchange between the bay and coastal ocean and freshwater inflows determine the year-to-year behavior of San Francisco Bay as a freshwater-saltwater mixing zone. Within the bay, exchanges between the embayments control variations over a season. Circulation and mixing patterns within the embayments and the magnitude of river-induced seaward flow influence the between-bay exchanges. The within-bay patterns are in turn determined by tides, winds, and freshwater inflows. Because freshwater inflow is the only factor that can be managed, a major study focus is estimation of inflow-related effects. Most questions relate to the patterns of freshwater inflow necessary to protect valuable resources whose welfare is dependent on conditions in the bay. Among the important questions being addressed are: --What quantity of freshwater inflow is necessary to prevent salt intrusion into the Sacramento-San Joaquin Delta, and what salinity distributions in the bay would result from various inflow patterns? --What quantity of freshwater inflow is sufficient to flush pollutants through the bay? Knowledge of circul
Hierarchical self-organization of non-cooperating individuals
Nepusz, Tamás
2013-01-01
Hierarchy is one of the most conspicuous features of numerous natural, technological and social systems. The underlying structures are typically complex and their most relevant organizational principle is the ordering of the ties among the units they are made of according to a network displaying hierarchical features. In spite of the abundant presence of hierarchy no quantitative theoretical interpretation of the origins of a multi-level, knowledge-based social network exists. Here we introduce an approach which is capable of reproducing the emergence of a multi-levelled network structure based on the plausible assumption that the individuals (representing the nodes of the network) can make the right estimate about the state of their changing environment to a varying degree. Our model accounts for a fundamental feature of knowledge-based organizations: the less capable individuals tend to follow those who are better at solving the problems they all face. We find that relatively simple rules lead to hierarchic...
Hierarchical Resource Allocation in Femtocell Networks using Graph Algorithms
Sadr, Sanam
2012-01-01
This paper presents a hierarchical approach to resource allocation in open-access femtocell networks. The major challenge in femtocell networks is interference management which in our system, based on the Long Term Evolution (LTE) standard, translates to which user should be allocated which physical resource block (or fraction thereof) from which femtocell access point (FAP). The globally optimal solution requires integer programming and is mathematically intractable. We propose a hierarchical three-stage solution: first, the load of each FAP is estimated considering the number of users connected to the FAP, their average channel gain and required data rates. Second, based on each FAP's load, the physical resource blocks (PRBs) are allocated to FAPs in a manner that minimizes the interference by coloring the modified interference graph. Finally, the resource allocation is performed at each FAP considering users' instantaneous channel gain. The two major advantages of this suboptimal approach are the significa...
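The second stage above, assigning PRB groups by coloring the interference graph, can be sketched with a simple greedy coloring. The FAP names and edges below are hypothetical, and the paper's modified interference-graph construction and load-based ordering are not reproduced here:

```python
def greedy_coloring(adjacency):
    """Give each FAP the smallest PRB group index unused by its
    interference neighbors (greedy coloring of the interference graph)."""
    colors = {}
    for node in sorted(adjacency):            # fixed order for determinism
        taken = {colors[v] for v in adjacency[node] if v in colors}
        c = 0
        while c in taken:
            c += 1
        colors[node] = c
    return colors

# Hypothetical interference graph: an edge joins FAPs too close to share PRBs.
interference = {
    "FAP1": ["FAP2", "FAP3"],
    "FAP2": ["FAP1", "FAP3"],
    "FAP3": ["FAP1", "FAP2", "FAP4"],
    "FAP4": ["FAP3"],
}
assignment = greedy_coloring(interference)    # FAP -> PRB group index
```

Adjacent FAPs always receive different indices, so no two interfering access points share a PRB group; a load-aware ordering, as in the paper, would simply change the iteration order.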
Chesapeake Bay: Introduction to an Ecosystem.
Environmental Protection Agency, Washington, DC.
The Chesapeake Bay is the largest estuary in the contiguous United States. The Bay and its tidal tributaries make up the Chesapeake Bay ecosystem. This document, which focuses on various aspects of this ecosystem, is divided into four major parts. The first part traces the geologic history of the Bay, describes the overall physical structure of…
Evaluating Bay Area Methane Emission Inventory
Energy Technology Data Exchange (ETDEWEB)
Fischer, Marc [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jeong, Seongeun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2016-03-01
As a regulatory agency, evaluating and improving estimates of methane (CH4) emissions from the San Francisco Bay Area is an area of interest to the Bay Area Air Quality Management District (BAAQMD). Currently, regional, state, and federal agencies generally estimate methane emissions using bottom-up inventory methods that rely on a combination of activity data, emission factors, biogeochemical models and other information. Recent atmospheric top-down measurement estimates of methane emissions for the US as a whole (e.g., Miller et al., 2013) and in California (e.g., Jeong et al., 2013; Peischl et al., 2013) have shown inventories underestimate total methane emissions by ~ 50% in many areas of California, including the SF Bay Area (Fairley and Fischer, 2015). The goal of this research is to provide information to help improve methane emission estimates for the San Francisco Bay Area. The research effort builds upon our previous work that produced methane emission maps for each of the major source sectors as part of the California Greenhouse Gas Emissions Measurement (CALGEM) project (http://calgem.lbl.gov/prior_emission.html; Jeong et al., 2012; Jeong et al., 2013; Jeong et al., 2014). Working with BAAQMD, we evaluate the existing inventory in light of recently published literature and revise the CALGEM CH4 emission maps to provide better specificity for BAAQMD. We also suggest further research that will improve emission estimates. To accomplish the goals, we reviewed the current BAAQMD inventory, and compared its method with those from the state inventory from the California Air Resources Board (CARB), the CALGEM inventory, and recent published literature. We also updated activity data (e.g., livestock statistics) to reflect recent changes and to better represent spatial information. Then, we produced spatially explicit CH4 emission estimates on the 1-km modeling grid used by BAAQMD. We present the detailed activity data, methods and derived emission maps by sector
Bayes multiple decision functions.
Wu, Wensong; Peña, Edsel A
2013-01-01
This paper deals with the problem of simultaneously making many (M) binary decisions based on one realization of a random data matrix X. M is typically large and X will usually have M rows associated with each of the M decisions to make, but for each row the data may be low dimensional. Such problems arise in many practical areas, such as the biological and medical sciences, where the available dataset is from microarrays or other high-throughput technologies and the goal is to decide which among many genes are relevant with respect to some phenotype of interest; in the engineering and reliability sciences; in astronomy; in education; and in business. A Bayesian decision-theoretic approach to this problem is implemented with the overall loss function being a cost-weighted linear combination of Type I and Type II loss functions. The class of loss functions considered allows for use of the false discovery rate (FDR), false nondiscovery rate (FNR), and missed discovery rate (MDR) in assessing the quality of the decisions. Through this Bayesian paradigm, the Bayes multiple decision function (BMDF) is derived and an efficient algorithm to obtain the optimal Bayes action is described. In contrast to many works in the literature where the rows of the matrix X are assumed to be stochastically independent, we allow a dependent data structure with the associations obtained through a class of frailty-induced Archimedean copulas. In particular, non-Gaussian dependent data structure, which is typical with failure-time data, can be entertained. The numerical implementation of the determination of the Bayes optimal action is facilitated through sequential Monte Carlo techniques. The theory developed could also be extended to the problem of multiple hypotheses testing, multiple classification and prediction, and high-dimensional variable selection. The proposed procedure is illustrated for the simple versus simple hypotheses setting and for the composite hypotheses setting.
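In the simplest special case, independent rows and an additive cost-weighted combination of Type I and Type II losses, the Bayes action reduces to thresholding each row's posterior probability at cost_I / (cost_I + cost_II). A hedged sketch of that reduction (the posterior probabilities are illustrative, and the paper's copula-dependent case is not handled):

```python
def bayes_multiple_decisions(post_probs, cost_I=1.0, cost_II=1.0):
    """Bayes rule for loss = cost_I * (false discoveries) + cost_II *
    (missed discoveries): declaring a discovery costs cost_I * (1 - p) in
    expectation, not declaring costs cost_II * p, so declare whenever
    p > cost_I / (cost_I + cost_II)."""
    threshold = cost_I / (cost_I + cost_II)
    return [int(p > threshold) for p in post_probs]

# Posterior probabilities that each of M = 5 nulls is false (illustrative).
post = [0.95, 0.40, 0.70, 0.10, 0.85]
decisions = bayes_multiple_decisions(post, cost_I=2.0, cost_II=1.0)  # threshold 2/3
```

Making Type I errors twice as costly raises the threshold from 1/2 to 2/3, so only rows with strong posterior evidence are declared discoveries.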
Psychometric Properties of IRT Proficiency Estimates
Kolen, Michael J.; Tong, Ye
2010-01-01
Psychometric properties of item response theory proficiency estimates are considered in this paper. Proficiency estimators based on summed scores and pattern scores include non-Bayes maximum likelihood and test characteristic curve estimators and Bayesian estimators. The psychometric properties investigated include reliability, conditional…
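As a concrete example of a pattern-score maximum likelihood proficiency estimator, here is a grid-search MLE under the two-parameter logistic (2PL) model. The item parameters and response pattern are made up for illustration, and a real scoring engine would use Newton iterations rather than a grid:

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: P(correct | theta, discrimination a, difficulty b)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mle_theta(responses, items):
    """Pattern-score ML proficiency estimate via grid search over theta."""
    grid = [g / 100.0 for g in range(-400, 401)]   # theta in [-4, 4], step 0.01
    def loglik(theta):
        ll = 0.0
        for x, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            ll += math.log(p) if x else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)

# (discrimination, difficulty) for three items -- assumed values.
items = [(1.2, -1.0), (1.0, 0.0), (0.8, 1.0)]
theta_hat = mle_theta([1, 1, 0], items)   # correct, correct, incorrect
```

A summed-score estimator would instead map the raw score (here 2 of 3) through the test characteristic curve, discarding the pattern information that the MLE uses.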
Yates, K.K.; Cronin, T. M.; Crane, M.; Hansen, M.; Nayeghandi, A.; Swarzenski, P.; Edgar, T.; Brooks, G.R.; Suthard, B.; Hine, A.; Locker, S.; Willard, D.A.; Hastings, D.; Flower, B.; Hollander, D.; Larson, R.A.; Smith, K.
2007-01-01
Many of the nation's estuaries have been environmentally stressed since the turn of the 20th century and will continue to be impacted in the future. Tampa Bay, one of the Gulf of Mexico's largest estuaries, exemplifies the threats that our estuaries face (EPA Report 2001, Tampa Bay Estuary Program-Comprehensive Conservation and Management Plan (TBEP-CCMP)). More than 2 million people live in the Tampa Bay watershed, and the population continues to grow. Demand for freshwater resources, conversion of undeveloped areas to residential and industrial uses, increases in storm-water runoff, and increased air pollution from urban and industrial sources are some of the known human activities that impact Tampa Bay. Beginning in 2001, additional anthropogenic modifications began in Tampa Bay, including construction of an underwater gas pipeline and a desalination plant, expansion of existing ports, and increased freshwater withdrawal from three major tributaries to the bay. In January of 2001, the Tampa Bay Estuary Program (TBEP) and its partners identified a critical need for participation from the U.S. Geological Survey (USGS) in providing multidisciplinary expertise and a regional-scale, integrated science approach to address complex scientific research issues and critical scientific information gaps that are necessary for continued restoration and preservation of Tampa Bay. Tampa Bay stakeholders identified several critical science gaps for which USGS expertise was needed (Yates et al. 2001). These critical science gaps fall under four topical categories (or system components): (1) water and sediment quality, (2) hydrodynamics, (3) geology and geomorphology, and (4) ecosystem structure and function. Scientists and resource managers participating in Tampa Bay studies recognize that it is no longer sufficient to simply examine each of these estuarine system components individually; rather, the interrelations among system components must be understood to develop conceptual and
Discovering hierarchical structure in normal relational data
DEFF Research Database (Denmark)
Schmidt, Mikkel Nørgaard; Herlau, Tue; Mørup, Morten
2014-01-01
Hierarchical clustering is a widely used tool for structuring and visualizing complex data using similarity. Traditionally, hierarchical clustering is based on local heuristics that do not explicitly provide assessment of the statistical saliency of the extracted hierarchy. We propose a non-param...
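The local-heuristic tradition the authors contrast their statistical approach with can be illustrated by a naive single-linkage agglomeration on one-dimensional points (illustrative data; real implementations use efficient linkage algorithms):

```python
def single_linkage(points, k):
    """Naive agglomerative clustering: repeatedly merge the two clusters
    whose closest members are nearest, until k clusters remain."""
    clusters = [[p] for p in points]

    def dist(c1, c2):
        # Single linkage: distance between clusters = closest pair of members.
        return min(abs(a - b) for a in c1 for b in c2)

    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

clusters = single_linkage([1.0, 1.1, 1.2, 5.0, 5.1], k=2)
```

The merge order defines a hierarchy, but nothing in the procedure says whether a given split is statistically meaningful, which is exactly the gap the nonparametric Bayesian model in the paper addresses.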
Discursive Hierarchical Patterning in Economics Cases
Lung, Jane
2011-01-01
This paper attempts to apply Lung's (2008) model of the discursive hierarchical patterning of cases to a closer and more specific study of Economics cases and proposes a model of the distinct discursive hierarchical patterning of the same. It examines a corpus of 150 Economics cases with a view to uncovering the patterns of discourse construction.…
A Model of Hierarchical Key Assignment Scheme
Institute of Scientific and Technical Information of China (English)
ZHANG Zhigang; ZHAO Jing; XU Maozhi
2006-01-01
A model of the hierarchical key assignment scheme is proposed in this paper; it can be used with any cryptographic algorithm. In addition, the optimal dynamic control property of a hierarchical key assignment scheme is defined, and our scheme model is shown to meet this property.
Island Bay Wilderness study area : Island Bay National Wildlife Refuge
US Fish and Wildlife Service, Department of the Interior — This document is a brief report on a wilderness study area located in the Island Bay National Wildlife Refuge. It discusses the history of the study area, its...
Galaxy formation through hierarchical clustering
White, Simon D. M.; Frenk, Carlos S.
1991-01-01
Analytic methods for studying the formation of galaxies by gas condensation within massive dark halos are presented. The present scheme applies to cosmogonies where structure grows through hierarchical clustering of a mixture of gas and dissipationless dark matter. The simplest models consistent with the current understanding of N-body work on dissipationless clustering, and that of numerical and analytic work on gas evolution and cooling are adopted. Standard models for the evolution of the stellar population are also employed, and new models for the way star formation heats and enriches the surrounding gas are constructed. Detailed results are presented for a cold dark matter universe with Omega = 1 and H(0) = 50 km/s/Mpc, but the present methods are applicable to other models. The present luminosity functions contain significantly more faint galaxies than are observed.
Groups possessing extensive hierarchical decompositions
Januszkiewicz, T; Leary, I J
2009-01-01
Kropholler's class of groups is the smallest class of groups which contains all finite groups and is closed under the following operator: whenever $G$ admits a finite-dimensional contractible $G$-CW-complex in which all stabilizer groups are in the class, then $G$ is itself in the class. Kropholler's class admits a hierarchical structure, i.e., a natural filtration indexed by the ordinals. For example, stage 0 of the hierarchy is the class of all finite groups, and stage 1 contains all groups of finite virtual cohomological dimension. We show that for each countable ordinal $\\alpha$, there is a countable group that is in Kropholler's class which does not appear until the $\\alpha+1$st stage of the hierarchy. Previously this was known only for $\\alpha= 0$, 1 and 2. The groups that we construct contain torsion. We also review the construction of a torsion-free group that lies in the third stage of the hierarchy.
Quantum transport through hierarchical structures.
Boettcher, S; Varghese, C; Novotny, M A
2011-04-01
The transport of quantum electrons through hierarchical lattices is of interest because such lattices have some properties of both regular lattices and random systems. We calculate the electron transmission as a function of energy in the tight-binding approximation for two related Hanoi networks. HN3 is a Hanoi network with every site having three bonds. HN5 has additional bonds added to HN3 to make the average number of bonds per site equal to five. We present a renormalization group approach to solve the matrix equation involved in this quantum transport calculation. We observe band gaps in HN3, while no such band gaps are observed in linear networks or in HN5. We provide a detailed scaling analysis near the edges of these band gaps.
Hierarchical networks of scientific journals
Palla, Gergely; Mones, Enys; Pollner, Péter; Vicsek, Tamás
2015-01-01
Scientific journals are the repositories of the gradually accumulating knowledge of mankind about the world surrounding us. Just as our knowledge is organised into classes ranging from major disciplines, subjects and fields to increasingly specific topics, journals can also be categorised into groups using various metrics. In addition to the set of topics characteristic for a journal, they can also be ranked regarding their relevance from the point of overall influence. One widespread measure is impact factor, but in the present paper we intend to reconstruct a much more detailed description by studying the hierarchical relations between the journals based on citation data. We use a measure related to the notion of m-reaching centrality and find a network which shows the level of influence of a journal from the point of the direction and efficiency with which information spreads through the network. We can also obtain an alternative network using a suitably modified nested hierarchy extraction method applied ...
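A minimal sketch of m-reaching centrality, the notion the authors' influence measure is related to: the fraction of other nodes reachable from a node within m directed steps. The journal network below is hypothetical:

```python
from collections import deque

def m_reach(graph, source, m):
    """Fraction of other nodes reachable from source within m directed steps
    (breadth-first search truncated at depth m)."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == m:
            continue
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return (len(seen) - 1) / (len(graph) - 1)

# Toy citation-flow network between journals (hypothetical edges).
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}
rank = {j: m_reach(graph, j, m=2) for j in graph}
```

Ranking journals by this score orders them by how far information spreads from them in m steps, the basis for extracting a hierarchy from the citation network.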
Adaptive Sampling in Hierarchical Simulation
Energy Technology Data Exchange (ETDEWEB)
Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R
2007-07-09
We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.
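The adaptive-sampling idea, reuse a stored fine-scale response when a nearby sample already exists and only call the expensive model otherwise, can be sketched with nearest-neighbor reuse. The paper's moving kriging interpolation and dynamic metric-tree database are replaced here by a linear scan and a fixed tolerance, so this is a structural sketch only:

```python
class AdaptiveSampler:
    """Answer queries from stored fine-scale samples when one is within tol;
    otherwise run the expensive fine-scale model and store the result."""

    def __init__(self, fine_model, tol):
        self.fine_model = fine_model
        self.tol = tol
        self.samples = []        # stored (input, output) pairs
        self.fine_calls = 0      # how often the expensive model actually ran

    def query(self, x):
        near = [(abs(x - xs), ys) for xs, ys in self.samples if abs(x - xs) <= self.tol]
        if near:
            return min(near)[1]              # reuse nearest stored response
        self.fine_calls += 1
        y = self.fine_model(x)               # expensive fine-scale evaluation
        self.samples.append((x, y))
        return y

# Toy fine-scale "constitutive law"; a coarse simulation issues 50 queries.
sampler = AdaptiveSampler(fine_model=lambda x: x * x, tol=0.05)
outputs = [sampler.query(x / 100.0) for x in range(0, 100, 2)]
```

Most queries are served from the stored samples, so the expensive model runs far fewer times than the number of queries, which is the entire payoff of the approach.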
Hierarchically Nanostructured Materials for Sustainable Environmental Applications
Ren, Zheng; Guo, Yanbing; Liu, Cai-Hong; Gao, Pu-Xian
2013-11-01
This article presents a comprehensive overview of the hierarchical nanostructured materials with either geometry or composition complexity in environmental applications. The hierarchical nanostructures offer advantages of high surface area, synergistic interactions and multiple functionalities towards water remediation, environmental gas sensing and monitoring as well as catalytic gas treatment. Recent advances in synthetic strategies for various hierarchical morphologies such as hollow spheres and urchin-shaped architectures have been reviewed. In addition to the chemical synthesis, the physical mechanisms associated with the materials design and device fabrication have been discussed for each specific application. The development and application of hierarchical complex perovskite oxide nanostructures have also been introduced in photocatalytic water remediation, gas sensing and catalytic converter. Hierarchical nanostructures will open up many possibilities for materials design and device fabrication in environmental chemistry and technology.
A neural signature of hierarchical reinforcement learning.
Ribas-Fernandes, José J F; Solway, Alec; Diuk, Carlos; McGuire, Joseph T; Barto, Andrew G; Niv, Yael; Botvinick, Matthew M
2011-07-28
Human behavior displays hierarchical structure: simple actions cohere into subtask sequences, which work together to accomplish overall task goals. Although the neural substrates of such hierarchy have been the target of increasing research, they remain poorly understood. We propose that the computations supporting hierarchical behavior may relate to those in hierarchical reinforcement learning (HRL), a machine-learning framework that extends reinforcement-learning mechanisms into hierarchical domains. To test this, we leveraged a distinctive prediction arising from HRL. In ordinary reinforcement learning, reward prediction errors are computed when there is an unanticipated change in the prospects for accomplishing overall task goals. HRL entails that prediction errors should also occur in relation to task subgoals. In three neuroimaging studies we observed neural responses consistent with such subgoal-related reward prediction errors, within structures previously implicated in reinforcement learning. The results reported support the relevance of HRL to the neural processes underlying hierarchical behavior.
Hierarchical Identity-Based Lossy Trapdoor Functions
Escala, Alex; Libert, Benoit; Rafols, Carla
2012-01-01
Lossy trapdoor functions, introduced by Peikert and Waters (STOC'08), have received a lot of attention in recent years because of their wide range of applications in theoretical cryptography. The notion has recently been extended to the identity-based scenario by Bellare et al. (Eurocrypt'12). We provide one more step in this direction by considering the notion of hierarchical identity-based lossy trapdoor functions (HIB-LTDFs). Hierarchical identity-based cryptography generalizes identity-based cryptography in the sense that identities are organized in a hierarchical way; a parent identity has more power than its descendants, because it can generate valid secret keys for them. Hierarchical identity-based cryptography has proved very useful both for practical applications and for establishing theoretical relations with other cryptographic primitives. In order to realize HIB-LTDFs, we first build a weakly secure hierarchical predicate encryption scheme. This scheme, which may be of independent interest, is...
Hierarchically nanostructured materials for sustainable environmental applications
Ren, Zheng; Guo, Yanbing; Liu, Cai-Hong; Gao, Pu-Xian
2013-01-01
This review presents a comprehensive overview of the hierarchical nanostructured materials with either geometry or composition complexity in environmental applications. The hierarchical nanostructures offer advantages of high surface area, synergistic interactions, and multiple functionalities toward water remediation, biosensing, environmental gas sensing and monitoring as well as catalytic gas treatment. Recent advances in synthetic strategies for various hierarchical morphologies such as hollow spheres and urchin-shaped architectures have been reviewed. In addition to the chemical synthesis, the physical mechanisms associated with the materials design and device fabrication have been discussed for each specific application. The development and application of hierarchical complex perovskite oxide nanostructures have also been introduced in photocatalytic water remediation, gas sensing, and catalytic converter. Hierarchical nanostructures will open up many possibilities for materials design and device fabrication in environmental chemistry and technology. PMID:24790946
Hierarchically Nanoporous Bioactive Glasses for High Efficiency Immobilization of Enzymes
DEFF Research Database (Denmark)
He, W.; Min, D.D.; Zhang, X.D.
2014-01-01
Bioactive glasses with hierarchical nanoporosity and structures have been widely used for the immobilization of enzymes. Thanks to meticulous design and ingenious hierarchical nanostructuration of porosities from yeast cell biotemplates, hierarchically nanostructured porous bioactive glasses can...
Hierarchical Matching and Regression with Application to Photometric Redshift Estimation
Murtagh, Fionn
2017-06-01
This work emphasizes that heterogeneity, diversity, discontinuity, and discreteness in data is to be exploited in classification and regression problems. A global a priori model may not be desirable. For data analytics in cosmology, this is motivated by the variety of cosmological objects such as elliptical, spiral, active, and merging galaxies at a wide range of redshifts. Our aim is matching and similarity-based analytics that takes account of discrete relationships in the data. The information structure of the data is represented by a hierarchy or tree where the branch structure, rather than just the proximity, is important. The representation is related to p-adic number theory. The clustering or binning of the data values, related to the precision of the measurements, has a central role in this methodology. If used for regression, our approach is a method of cluster-wise regression, generalizing nearest neighbour regression. Both to exemplify this analytics approach, and to demonstrate computational benefits, we address the well-known photometric redshift or `photo-z' problem, seeking to match Sloan Digital Sky Survey (SDSS) spectroscopic and photometric redshifts.
Hierarchical Bayesian parameter estimation for cumulative prospect theory
Nilsson, H.; Rieskamp, J.; Wagenmakers, E.-J.
2011-01-01
Cumulative prospect theory (CPT; Tversky & Kahneman, 1992) has provided one of the most influential accounts of how people make decisions under risk. CPT is a formal model with parameters that quantify psychological processes such as loss aversion, subjective values of gains and losses, and
What is causing the phytoplankton increase in San Francisco Bay?
Cloern, J.E.; Jassby, A.D.; Schraga, T.S.; Dallas, K.L.
2006-01-01
The largest living component of San Francisco Bay is the phytoplankton, a suspension of microscopic cells that convert sunlight energy into new living biomass through the same process of photosynthesis used by land plants. This primary production is the ultimate source of food for clams, zooplankton, crabs, sardines, halibut, sturgeon, diving ducks, pelicans, and harbor seals. From measurements made in 1980, we estimated that phytoplankton primary production in San Francisco Bay was about 200,000 tons of organic carbon per year (Jassby et al. 1993). This is equivalent to producing the biomass of 5500 adult humpback whales, or the calories to feed 1.8 million people. These numbers may seem large, but primary production in San Francisco Bay is low compared to many other nutrient-enriched estuaries.
Perotti, Juan Ignacio; Caldarelli, Guido
2015-01-01
The quest for a quantitative characterization of the community and modular structure of complex networks has produced a variety of methods and algorithms to classify different networks. However, it is not clear whether such methods provide consistent, robust and meaningful results when considering hierarchies as a whole. Part of the problem is the lack of a similarity measure for the comparison of hierarchical community structures. In this work we contribute by introducing the {\it hierarchical mutual information}, a generalization of the traditional mutual information that allows one to compare hierarchical partitions and hierarchical community structures. The {\it normalized} version of the hierarchical mutual information should behave analogously to the traditional normalized mutual information. Here, the correct behavior of the hierarchical mutual information is corroborated on an extensive battery of numerical experiments. The experiments are performed on artificial hierarchies, and on the hierarchical ...
Distributed Plume Source Localization Using Hierarchical Sensor Networks
Institute of Scientific and Technical Information of China (English)
KUANG Xing-hong; LIU Yu-qing; WU Yan-xiang; SHAO Hui-he
2009-01-01
A hierarchical wireless sensor network (WSN) was proposed to estimate the plume source location. Such a WSN can be of tremendous help to emergency personnel trying to protect people from terrorist attacks or responding to an accident. The entire surveillance field is divided into several small sub-regions. In each sub-region, a localization algorithm based on the improved particle filter (IPF) was performed to estimate the location. Improvements such as weighted centroid and residual resampling were introduced into the IPF algorithm to increase localization performance. This distributed estimation method eliminates many drawbacks inherent in the traditional centralized optimization method. Simulation results show that the localization algorithm is efficient for estimating the plume source location.
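Of the improvements named in this abstract, residual resampling is easy to state concretely. The sketch below is a generic textbook form of the algorithm in Python, not the paper's implementation; the function and variable names are ours:

```python
import random

def residual_resample(weights, rng=random):
    """Residual resampling for a particle filter: each particle i is
    deterministically copied floor(N * w_i) times, and the remaining
    slots are filled by sampling proportionally to the residual weights.
    This reduces resampling variance relative to plain multinomial
    resampling. Returns the list of selected particle indices."""
    n = len(weights)
    total = sum(weights)
    scaled = [w / total * n for w in weights]
    counts = [int(s) for s in scaled]               # deterministic copies
    residual = [s - c for s, c in zip(scaled, counts)]
    r_total = sum(residual)
    for _ in range(n - sum(counts)):                # fill remaining slots
        u = rng.random() * r_total
        acc = 0.0
        for i, r in enumerate(residual):
            acc += r
            if u <= acc:
                counts[i] += 1
                break
    return [i for i, c in enumerate(counts) for _ in range(c)]
```

With uniform weights the deterministic step already fills every slot, so each particle is kept exactly once; only the residual remainder is left to chance.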
Causal Bayes Model of Mathematical Competence in Kindergarten
Directory of Open Access Journals (Sweden)
Božidar Tepeš
2016-06-01
In this paper the authors define mathematical competences in the kindergarten. The basic objective was to measure mathematical competences, that is, mathematical knowledge, skills, and abilities in mathematical education. Mathematical competences were grouped into two areas: arithmetic and geometry. The statistical sample consisted of 59 children, 65 to 85 months of age, from the kindergarten Milan Sachs in Zagreb. The authors describe 13 variables for measuring mathematical competences: five for geometry and eight for arithmetic. The measuring variables are tasks that the children solved, with the results evaluated. From these measurements the authors built a causal Bayes model using the free software Tetrad 5.2.1-3. The software generates many candidate causal Bayes models, and the authors, as experts, chose the model of mathematical competences in the kindergarten. The causal Bayes model describes five levels of mathematical competences. At the end of the modeling the authors use a Bayes estimator. In the results, the authors use the causal Bayes model to describe causal effects among mathematical competences, that is, how intervention on some competences affects other competences. Mathematical competences are measured through their expectations as random variables; when the expectation of a competence was greater, the competence improved. Mathematical competences can thus be improved by intervening on causally upstream competences. The levels of mathematical competences and the effects of intervention on them can help mathematics teachers.
Vapor Intrusion Facilities - South Bay
U.S. Environmental Protection Agency — POINT locations for the South Bay Vapor Instrusion Sites were derived from the NPL data for Region 9. One site, Philips Semiconductor, was extracted from the...
National Oceanic and Atmospheric Administration, Department of Commerce — Samples were collected from October 15, 1985 through June 12, 1987 in emergent marsh and non-vegetated habitats throughout the Lavaca Bay system to characterize...
Annual report, Bristol Bay, 1958
US Fish and Wildlife Service, Department of the Interior — Commercial fishery management activities for Bristol Bay for 1958, including lists of operators, extensive statistics, and descriptions of enforcement activities.
National Oceanic and Atmospheric Administration, Department of Commerce — Juvenile spotted seatrout and other sportfish are being monitored annually over a 6-mo period in Florida Bay to assess their abundance over time relative to...
CHWAKA BAY MANGROVE SEDIMENTS, ZANZIBAR
African Journals Online (AJOL)
Mohammed-Studies on Benthic denitrification in the Chwaka bay mangrove. Extensive mangrove ... In this case, six sediment cores were taken randomly from the three study sites as above and a ..... Academic Press. Orlando. pp. 277-293.
Annual report, Bristol Bay, 1955
US Fish and Wildlife Service, Department of the Interior — Commercial fishery management activities for Bristol Bay for 1955, including lists of operators, extensive statistics, descriptions of enforcement activities, and...
Back Bay Wilderness area description
US Fish and Wildlife Service, Department of the Interior — This document is a description of the lands located within the Back Bay National Wildlife Refuge. Within these lands, it designates which area is suitable for...
Local Component Analysis for Nonparametric Bayes Classifier
Khademi, Mahmoud; Safayani, Mehran
2010-01-01
The decision boundaries of the Bayes classifier are optimal because they lead to the maximum probability of correct decision. This means that if we knew the prior probabilities and the class-conditional densities, we could design a classifier with the lowest probability of error. However, in classification based on nonparametric density estimation methods such as Parzen windows, the decision regions depend on the choice of parameters such as the window width. Moreover, these methods suffer from the curse of dimensionality of the feature space and the small-sample-size problem, which severely restricts their practical applications. In this paper, we address these problems by introducing a novel dimension reduction and classification method based on local component analysis. In this method, by adopting an iterative cross-validation algorithm, we simultaneously estimate the optimal transformation matrices (for dimension reduction) and classifier parameters based on local information. The proposed method can classify the data with co...
Directory of Open Access Journals (Sweden)
F. Gazeau
2005-01-01
Planktonic and benthic incubations (bare and Posidonia oceanica vegetated sediments) were performed at monthly intervals from March 2001 to October 2002 in a seagrass vegetated area of the Bay of Palma (Mallorca, Spain). Results showed a contrast between the planktonic compartment, which was on average near metabolic balance (−4.6±5.9 mmol O2 m-2 d-1), and the benthic compartment, which was autotrophic (17.6±8.5 mmol O2 m-2 d-1). During two cruises in March and June 2002, planktonic and benthic incubations were performed at several stations in the bay to estimate the whole-system metabolism and to examine its relationship with spatial patterns of the partial pressure of CO2 (pCO2) and apparent oxygen utilisation (AOU). Moreover, during the second cruise, when the residence time of water was long enough, net ecosystem production (NEP) estimates based on incubations were compared, over the Posidonia oceanica meadow, to rates derived from dissolved inorganic carbon (DIC) and oxygen (O2) mass balance budgets. These budgets provided NEP estimates in fair agreement with those derived from direct metabolic estimates based on incubated samples over the Posidonia oceanica meadow. Whereas the seagrass community was autotrophic, the excess organic carbon production therein could only balance the planktonic heterotrophy in waters shallow relative to the maximum depth of the bay (55 m). This generated a horizontal gradient from autotrophic or balanced communities in the shallow seagrass-covered areas to strongly heterotrophic communities in deeper areas of the bay. It seems therefore that, on an annual scale in the whole bay, the organic matter production by the Posidonia oceanica may not be sufficient to fully compensate the heterotrophy of the planktonic compartment, which may require external organic carbon inputs, most likely from land.
Tools to estimate PM2.5 mass have expanded in recent years, and now include: 1) stationary monitor readings, 2) Community Multi-Scale Air Quality (CMAQ) model estimates, 3) Hierarchical Bayesian (HB) estimates from combined stationary monitor readings and CMAQ model output; and, ...
A hierarchical instrumental decision theory of nicotine dependence.
Hogarth, Lee; Troisi, Joseph R
2015-01-01
It is important to characterize the learning processes governing tobacco-seeking in order to understand how best to treat this behavior. Most drug learning theories have adopted a Pavlovian framework wherein the conditioned response is the main motivational process. We favor instead a hierarchical instrumental decision account, wherein expectations about the instrumental contingency between voluntary tobacco-seeking and the receipt of nicotine reward determine the probability of executing this behavior. To support this view, we review titration and nicotine discrimination research showing that internal signals for deprivation/satiation modulate expectations about the current incentive value of smoking, thereby modulating the propensity of this behavior. We also review research on cue reactivity, which has shown that external smoking cues modulate expectations about the probability of the tobacco-seeking response being effective, thereby modulating the propensity of this behavior. Economic decision theory is then considered to elucidate how expectations about the value and probability of the response-nicotine contingency are integrated to form an overall utility estimate for that option, for comparison with qualitatively different, nonsubstitute reinforcers, to determine response selection. As an applied test of this hierarchical instrumental decision framework, we consider how well it accounts for individual liability to smoking uptake and perseveration, pharmacotherapy, cue-extinction therapies, and plain packaging. We conclude that the hierarchical instrumental account succeeds in reconciling this broad range of phenomena precisely because it accepts that multiple diverse sources of internal and external information must be integrated to shape the decision to smoke.
Unified robust-Bayes multisource ambiguous data rule fusion
El-Fallah, A.; Zatezalo, A.; Mahler, R.; Mehra, R. K.
2005-05-01
The ambiguity of human information sources and of a priori human context would seem to automatically preclude the feasibility of a Bayesian approach to information fusion. We show that this is not necessarily the case, and that one can model the ambiguities associated with defining a "state" or "states of interest" of an entity. We show likewise that we can model information such as natural-language statements, and hedge against the uncertainties associated with the modeling process. Likewise, a likelihood can be created that hedges against the inherent uncertainties in information generation and collection, including the uncertainties created by the passage of time between information collections. As with the processing of conventional sensor information, we use the Bayes filter to produce posterior distributions from which we can extract estimates not only of the states, but also estimates of the reliability of those state-estimates. Results of testing this novel Bayes-filter information-fusion approach against simulated data are presented.
Controls on residence time and exchange in a system of shallow coastal bays
Safak, I.; Wiberg, P. L.; Richardson, D. L.; Kurum, M. O.
2015-04-01
Patterns of transport and residence time influence the morphology, ecology and biogeochemistry of shallow coastal bay systems in important ways. To better understand the factors controlling residence time and exchange in coastal bays, a three-dimensional finite-volume coastal ocean model was set up and validated with field observations of circulation in a system of 14 shallow coastal bays on the Atlantic coast of the USA (Virginia Coast Reserve). Residence times of neutrally buoyant particles as well as exchange among the bays in the system and between the bays and the ocean were examined with Lagrangian particle tracking. There was orders of magnitude variation in the calculated residence time within most of the bays, ranging from hours in the tidally refreshed (repletion) water near the inlets to days-weeks in the remaining (residual) water away from the inlets. Residence time in the repletion waters was most sensitive to the tidal phase (low vs. high) when particles were released whereas residence time in the residual waters was more sensitive to wind forcing. Wind forcing was found to act as a diffuser that shortens particle residence within the bays; its effect was higher away from the inlets and in relatively confined bays. Median residence time in the bays significantly decreased with an increase in the ratio between open water area and total area (open water plus marsh). Exchange among the bays and capture areas of inlets (i.e., exchange between the bays and the ocean) varied considerably but were insensitive to tidal phase of release, wind, and forcing conditions in different years, in contrast to the sensitivity of residence time to these factors. We defined a new quantity, termed shortest-path residence time, calculated as distance from the closest inlet divided by root-mean-square velocity at each point in model domain. A relationship between shortest-path residence time and particle-tracking residence time provides a means of estimating residence time
Hierarchically structured, nitrogen-doped carbon membranes
Wang, Hong
2017-08-03
The present invention is a structure, method of making, and method of use for a novel macroscopic hierarchically structured, nitrogen-doped, nano-porous carbon membrane (HNDCM) with an asymmetric and hierarchical pore architecture that can be produced at large scale. The unique HNDCM holds great promise as a component in separation and advanced carbon devices because it could offer unconventional fluidic transport phenomena on the nanoscale. Overall, the invention set forth herein covers hierarchically structured, nitrogen-doped carbon membranes and methods of making and using such membranes.
A Model for Slicing JAVA Programs Hierarchically
Institute of Scientific and Technical Information of China (English)
Bi-Xin Li; Xiao-Cong Fan; Jun Pang; Jian-Jun Zhao
2004-01-01
Program slicing can be effectively used to debug, test, analyze, understand and maintain object-oriented software. In this paper, a new slicing model is proposed to slice Java programs based on their inherent hierarchical structure. The main idea of hierarchical slicing is to slice programs in a stepwise way, from the package level to the class level, the method level, and finally the statement level. The stepwise slicing algorithm and the related graph reachability algorithms are presented, and the architecture of the Java program Analyzing Tool (JATO), based on the hierarchical slicing model, is provided; the applications and a small case study are also discussed.
Hierarchical analysis of acceptable use policies
Directory of Open Access Journals (Sweden)
P. A. Laughton
2008-01-01
Acceptable use policies (AUPs) are vital tools for organizations to protect themselves and their employees from misuse of the computer facilities provided. A well-structured, thorough AUP is essential for any organization. It is impossible for an effective AUP to deal with every clause in depth and remain readable; for this reason, some sections of an AUP carry more weight than others, denoting importance. The methodology used to develop the hierarchical analysis is a literature review in which various sources were consulted. This hierarchical approach to AUP analysis attempts to highlight the important sections and clauses dealt with in an AUP. The emphasis of the hierarchical analysis is to prioritize the objectives of an AUP.
Hierarchical modeling and analysis for spatial data
Banerjee, Sudipto; Gelfand, Alan E
2003-01-01
Among the many uses of hierarchical models, their application to the statistical analysis of spatial and spatio-temporal data from areas such as epidemiology and environmental science has proven particularly fruitful. Yet to date, the few books that address the subject have been either too narrowly focused on specific aspects of spatial analysis, or written at a level often inaccessible to those lacking a strong background in mathematical statistics. Hierarchical Modeling and Analysis for Spatial Data is the first accessible, self-contained treatment of hierarchical methods, modeling, and dat
Image meshing via hierarchical optimization
Institute of Scientific and Technical Information of China (English)
Hao XIE; Ruo-feng TONG
2016-01-01
Vector graphics, as a kind of geometric representation of raster images, have many advantages, e.g., definition independence and editing facility. A popular way to convert raster images into vector graphics is image meshing, the aim of which is to find a mesh that represents an image as faithfully as possible. For traditional meshing algorithms, the crux of the problem resides mainly in the high non-linearity and non-smoothness of the objective, which makes it difficult to find a desirable optimal solution. To ameliorate this situation, we present a hierarchical optimization algorithm solving the problem from coarser levels to finer ones, providing initialization for each level with its coarser ascent. To further simplify the problem, the original non-convex problem is converted to a linear least squares one, and thus becomes convex, which makes the problem much easier to solve. A dictionary learning framework is used to combine geometry and topology elegantly. Then an alternating scheme is employed to solve both parts. Experiments show that our algorithm runs fast and achieves better results than existing ones for most images.
Bayesian hierarchical modeling for detecting safety signals in clinical trials.
Xia, H Amy; Ma, Haijun; Carlin, Bradley P
2011-09-01
Detection of safety signals from clinical trial adverse event data is critical in drug development, but carries a challenging statistical multiplicity problem. Bayesian hierarchical mixture modeling is appealing for its ability to borrow strength across subgroups in the data, as well as moderate extreme findings most likely due merely to chance. We implement such a model for subject incidence (Berry and Berry, 2004) using a binomial likelihood, and extend it to subject-year adjusted incidence rate estimation under a Poisson likelihood. We use simulation to choose a signal detection threshold, and illustrate some effective graphics for displaying the flagged signals.
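The "borrowing strength" behavior this abstract describes can be illustrated with a much-simplified empirical-Bayes sketch: raw per-event incidence rates are shrunk toward the pooled rate via a beta-binomial posterior mean. This is a toy reduction of the Berry-and-Berry hierarchical mixture model, with made-up counts and a hand-picked prior strength:

```python
def shrunk_rates(events, subjects, prior_strength=50.0):
    """Posterior-mean incidence under a Beta prior centered on the pooled
    rate. `prior_strength` (the prior's a + b) controls how strongly
    extreme raw rates are pulled toward the pool; 50.0 is hand-picked."""
    pooled = sum(events) / sum(subjects)
    a = prior_strength * pooled
    return [(a + e) / (prior_strength + n) for e, n in zip(events, subjects)]

# Hypothetical adverse-event counts for three event types, 100 subjects each.
events, subjects = [1, 4, 30], [100, 100, 100]
print(shrunk_rates(events, subjects))
# Raw rates 0.01 and 0.30 are both moderated toward the pooled rate ~0.117.
```

The full hierarchical model goes further, learning the prior strength from the data and mixing in a point mass at "no effect", but the shrinkage of extreme findings is the same mechanism.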
Frozen impacted drop: From fragmentation to hierarchical crack patterns
Ghabache, Elisabeth; Séon, Thomas
2016-01-01
We investigate experimentally the quenching of a liquid pancake, obtained through the impact of a water drop on a cold solid substrate ($0$ to $-60^\\circ$C). We show that, below a certain substrate temperature, fractures appear on the frozen pancake and the crack patterns change from a 2D fragmentation regime to a hierarchical fracture regime as the thermal shock is stronger. The different regimes are discussed and the transition temperatures are estimated through classical fracture scaling arguments. Finally, a phase diagram presents how these regimes can be controlled by the drop impact parameters.
Ensemble renormalization group for the random-field hierarchical model.
Decelle, Aurélien; Parisi, Giorgio; Rocchi, Jacopo
2014-03-01
The renormalization group (RG) methods are still far from being completely understood in quenched disordered systems. In order to gain insight into the nature of the phase transition of these systems, it is common to investigate simple models. In this work we study a real-space RG transformation on the Dyson hierarchical lattice with a random field, which leads to a reconstruction of the RG flow and to an evaluation of the critical exponents of the model at T=0. We show that this method gives very accurate estimations of the critical exponents by comparing our results with those obtained by some of us using an independent method.
Eccentricity evolution in hierarchical triple systems with eccentric outer binaries
Georgakarakos, Nikolaos
2014-01-01
We develop a technique for estimating the inner eccentricity in hierarchical triple systems, with the inner orbit being initially circular, while the outer one is eccentric. We consider coplanar systems with well separated components and comparable masses. The derivation of short period terms is based on an expansion of the rate of change of the Runge-Lenz vector. Then, the short period terms are combined with secular terms, obtained by means of canonical perturbation theory. The validity of the theoretical equations is tested by numerical integrations of the full equations of motion.
Digital Repository Service at National Institute of Oceanography (India)
Papa, F.; Bala, S.K.; Pandey, R.K.; Durand, F.; Gopalakrishna, V.V.; Rahman, A.; Rossow, W.B.
large sample of in situ river height measurements, we estimate the standard error of Jason-2-derived water levels over the Ganga and the Brahmaputra to be 0.28 m and 0.19 m, respectively, or less than approximately 4 percent of the annual peak...
Hierarchical modeling of cluster size in wildlife surveys
Royle, J. Andrew
2008-01-01
Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
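The cluster-size bias this abstract describes is easy to reproduce in a toy simulation: when detection probability rises with cluster size, the mean size of detected clusters exceeds the true population mean. The size distribution and detection curve below are made up purely for illustration:

```python
import random

random.seed(42)  # reproducible toy example

# Hypothetical population of cluster sizes (e.g. flock sizes), uniform on 1..20.
population = [random.randint(1, 20) for _ in range(100_000)]

def p_detect(size):
    """Illustrative detection curve: bigger clusters are easier to see."""
    return min(1.0, 0.1 + 0.04 * size)

detected = [s for s in population if random.random() < p_detect(s)]

mean_pop = sum(population) / len(population)   # ~10.5 (true mean)
mean_det = sum(detected) / len(detected)       # ~13: cluster-size bias
print(mean_pop, mean_det)
```

A hierarchical model of the kind described corrects this by modeling the size-dependent detection process explicitly rather than plugging in the naive detected-sample mean.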
Modeling local item dependence with the hierarchical generalized linear model.
Jiao, Hong; Wang, Shudong; Kamata, Akihito
2005-01-01
Local item dependence (LID) can emerge when the test items are nested within common stimuli or item groups. This study proposes a three-level hierarchical generalized linear model (HGLM) to model LID when LID is due to such contextual effects. The proposed three-level HGLM was examined by analyzing simulated data sets and was compared with the Rasch-equivalent two-level HGLM that ignores such a nested structure of test items. The results demonstrated that the proposed model could capture LID and estimate its magnitude. Also, the two-level HGLM resulted in larger mean absolute differences between the true and the estimated item difficulties than those from the proposed three-level HGLM. Furthermore, it was demonstrated that the proposed three-level HGLM estimated the ability distribution variance unaffected by the LID magnitude, while the two-level HGLM with no LID consideration increasingly underestimated the ability variance as the LID magnitude increased.
An Automatic Hierarchical Delay Analysis Tool
Institute of Scientific and Technical Information of China (English)
Farid Mheir-El-Saadi; Bozena Kaminska
1994-01-01
The performance analysis of VLSI integrated circuits (ICs) with flat tools is slow and sometimes impossible to complete. Some hierarchical tools have been developed to speed up the analysis of these large ICs. However, these hierarchical tools suffer from poor interaction with the CAD database and poorly automated operations. We introduce a general hierarchical framework for performance analysis to solve these problems. Circuit analysis is automatic under the proposed framework. Information that has been automatically abstracted in the hierarchy is kept in database properties along with the topological information. A limited software implementation of the framework, PREDICT, has also been developed to analyze delay performance. Experimental results show that hierarchical analysis CPU time and memory requirements are low if heuristics are used during the abstraction process.
Packaging glass with hierarchically nanostructured surface
He, Jr-Hau
2017-08-03
An optical device includes an active region and packaging glass located on top of the active region. A top surface of the packaging glass includes hierarchical nanostructures comprised of honeycombed nanowalls (HNWs) and nanorod (NR) structures extending from the HNWs.
Generation of hierarchically correlated multivariate symbolic sequences
Tumminello, M.; Mantegna, R. N.
2008-01-01
We introduce an algorithm to generate multivariate series of symbols from a finite alphabet with a given hierarchical structure of similarities. The target hierarchical structure of similarities is arbitrary, for instance the one obtained by some hierarchical clustering procedure as applied to an empirical matrix of Hamming distances. The algorithm can be interpreted as the finite alphabet equivalent of the recently introduced hierarchically nested factor model (M. Tumminello et al. EPL 78 (3) 30006 (2007)). The algorithm is based on a generating mechanism that is different from the one used in the mutation rate approach. We apply the proposed methodology for investigating the relationship between the bootstrap value associated with a node of a phylogeny and the probability of finding that node in the true phylogeny.
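A minimal sketch of the idea, assuming a two-cluster hierarchy and a single cluster-level factor symbol per time step (a strong simplification of the paper's arbitrary dendrogram and hierarchically nested factor mechanism): series in the same cluster copy a common symbol with some probability, so within-cluster Hamming distances come out smaller than between-cluster ones. All parameters here are hypothetical.

```python
import random

random.seed(0)
ALPHABET = "ACGT"
T = 5000  # sequence length

# Hypothetical two-cluster hierarchy: series 0,1 in one cluster, 2,3 in the other.
clusters = [[0, 1], [2, 3]]
p_cluster = 0.6   # prob. that a series copies its cluster's common symbol
series = [[] for _ in range(4)]

for _ in range(T):
    for members in clusters:
        common = random.choice(ALPHABET)  # cluster-level factor symbol
        for i in members:
            if random.random() < p_cluster:
                series[i].append(common)                   # correlated draw
            else:
                series[i].append(random.choice(ALPHABET))  # idiosyncratic draw

def hamming(a, b):
    # Normalized Hamming distance between two equal-length symbol sequences.
    return sum(x != y for x, y in zip(a, b)) / len(a)

within = hamming(series[0], series[1])
between = hamming(series[0], series[2])
print(round(within, 2), round(between, 2))  # within-cluster distance is smaller
```

A hierarchical clustering of the resulting Hamming-distance matrix would recover the two-cluster target structure, which is the property the paper's more general algorithm guarantees for an arbitrary similarity hierarchy.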
HIERARCHICAL ORGANIZATION OF INFORMATION, IN RELATIONAL DATABASES
Directory of Open Access Journals (Sweden)
Demian Horia
2008-05-01
In this paper I present different types of representation of hierarchical information inside a relational database, and compare them to find the best organization for specific scenarios.
Hierarchical Network Design Using Simulated Annealing
DEFF Research Database (Denmark)
Thomadsen, Tommy; Clausen, Jens
2002-01-01
The hierarchical network problem is the problem of finding the least-cost network, with nodes divided into groups, edges connecting nodes in each group, and groups ordered in a hierarchy. The idea of hierarchical networks comes from telecommunication networks, where hierarchies exist. Hierarchical networks are described and a mathematical model is proposed for a two-level version of the hierarchical network problem. The problem is to determine which edges should connect nodes, and how demand is routed in the network. The problem is solved heuristically using simulated annealing, which as a sub-algorithm uses a construction algorithm to determine edges and route the demand. Performance for different versions of the algorithm is reported in terms of runtime and quality of the solutions. The algorithm is able to find solutions of reasonable quality in approximately 1 hour for networks with 100 nodes.
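A hedged sketch of the heuristic on a toy instance: simulated annealing over binary edge-selection variables with a connectivity penalty. The node coordinates, penalty constant, and cooling schedule are invented for illustration; the paper's actual model adds hierarchy levels, group structure, and demand routing.

```python
import itertools
import math
import random

random.seed(42)

# Toy instance: 5 nodes at fixed coordinates; edge cost = Euclidean length.
pts = [(0, 0), (1, 0), (2, 1), (0, 2), (2, 3)]
edges = list(itertools.combinations(range(5), 2))

def length(e):
    (x1, y1), (x2, y2) = pts[e[0]], pts[e[1]]
    return math.hypot(x1 - x2, y1 - y2)

def connected(sel):
    # Depth-first search: the selected edges must connect all 5 nodes.
    adj = {i: [] for i in range(5)}
    for (u, v), s in zip(edges, sel):
        if s:
            adj[u].append(v)
            adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == 5

def cost(sel):
    total = sum(length(e) for e, s in zip(edges, sel) if s)
    return total if connected(sel) else total + 100.0  # penalty if disconnected

# Simulated annealing: flip one edge at a time; accept uphill moves with prob e^(-d/T).
state = [1] * len(edges)
best, best_cost = state[:], cost(state)
temp = 2.0
for _ in range(20000):
    cand = state[:]
    cand[random.randrange(len(edges))] ^= 1
    d = cost(cand) - cost(state)
    if d < 0 or random.random() < math.exp(-d / temp):
        state = cand
        if cost(state) < best_cost:
            best, best_cost = state[:], cost(state)
    temp *= 0.9995  # geometric cooling

print(round(best_cost, 3))
```

On this toy instance the optimum is the minimum spanning tree (cost about 6.41); the annealer should land at or near it, illustrating why the paper pairs the annealer with a construction sub-algorithm on realistic problem sizes.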
When to Use Hierarchical Linear Modeling
National Research Council Canada - National Science Library
Veronika Huta
2014-01-01
Previous publications on hierarchical linear modeling (HLM) have provided guidance on how to perform the analysis, yet there is relatively little information on two questions that arise even before analysis...
An introduction to hierarchical linear modeling
National Research Council Canada - National Science Library
Woltman, Heather; Feldstain, Andrea; MacKay, J. Christine; Rocchi, Meredith
2012-01-01
This tutorial aims to introduce Hierarchical Linear Modeling (HLM). A simple explanation of HLM is provided that describes when to use this statistical technique and identifies key factors to consider before conducting this analysis...
Conservation Laws in the Hierarchical Model
Beijeren, H. van; Gallavotti, G.; Knops, H.
1974-01-01
An exposition of the renormalization-group equations for the hierarchical model is given. Attention is drawn to some properties of the spin distribution functions which are conserved under the action of the renormalization group.
Hierarchical DSE for multi-ASIP platforms
DEFF Research Database (Denmark)
Micconi, Laura; Corvino, Rosilde; Gangadharan, Deepak;
2013-01-01
This work proposes a hierarchical Design Space Exploration (DSE) for the design of multi-processor platforms targeted to specific applications with strict timing and area constraints. In particular, it considers platforms integrating multiple Application Specific Instruction Set Processors (ASIPs)...
Institute of Scientific and Technical Information of China (English)
付为国; 汤涓涓; 吴沿友
2012-01-01
Estimation of plant leaf area is of great significance for evaluating the primary productivity of an ecosystem. Leaf areas of the dominant mangrove plants of the Quanzhou Bay estuarine wetland, Kandelia candel, Aegiceras corniculatum, and Avicennia marina, were estimated using the indices of maximum leaf length (X1), maximum leaf width (X2), and their product, with different types of linear or nonlinear regression equations, and the best-fitting regression equation for each species was determined. The results showed that the binary nonlinear regression equation Y = 0.7297 X1^0.8598 X2^1.1600 was the best-fitting equation for estimating the leaf area of K. candel, while the power equations Y = 0.9740 X^0.9634 and Y = 0.7773 X^0.9954 were most suitable for A. corniculatum and A. marina, respectively. Further 0-1 regression testing and relative-error analysis showed that these equations could accurately estimate the respective leaf areas, with the estimate for A. marina being the most accurate.
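The fitted equations can be applied directly. The sketch below assumes Y is leaf area in cm², X1 and X2 are maximum leaf length and width in cm, and X is their product, and takes the K. candel length exponent (0.8598) from the English half of the garbled abstract; the example leaf dimensions are hypothetical.

```python
def leaf_area_kandelia(length_cm, width_cm):
    # Binary power model reported in the abstract: Y = 0.7297 * L^0.8598 * W^1.1600
    return 0.7297 * length_cm ** 0.8598 * width_cm ** 1.1600

def leaf_area_aegiceras(lw_product):
    # Power model Y = 0.9740 * X^0.9634, X assumed to be max length x max width
    return 0.9740 * lw_product ** 0.9634

def leaf_area_avicennia(lw_product):
    # Power model Y = 0.7773 * X^0.9954
    return 0.7773 * lw_product ** 0.9954

# Hypothetical leaf: 8 cm long, 4 cm wide (product 32 cm^2).
print(round(leaf_area_kandelia(8, 4), 2))
print(round(leaf_area_aegiceras(32), 2))
print(round(leaf_area_avicennia(32), 2))
```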
Hierarchical organization versus self-organization
Busseniers, Evo
2014-01-01
In this paper we try to define the difference between hierarchical organization and self-organization. Organization is defined as a structure with a function, so the difference between hierarchical organization and self-organization can be defined both on the structure and on the function. In the next two chapters these two definitions are given. For the structure we will use some existing definitions in graph theory; for the function we will use existing theory on (self-)organization. In the t...
Hierarchical decision making for flood risk reduction
DEFF Research Database (Denmark)
Custer, Rocco; Nishijima, Kazuyoshi
2013-01-01
In current practice, structures are often optimized individually without considering the benefits of having a hierarchy of protection structures. It is argued here that the joint consideration of hierarchically integrated protection structures is beneficial. A hierarchical decision model is utilized to analyze and compare the benefit of large upstream protection structures and local downstream protection structures with regard to epistemic uncertainty parameters. Results suggest that epistemic uncertainty influences the outcome of the decision model and that, depending on the magnitude of epistemic uncertainty...
Hierarchical self-organization of tectonic plates
2010-01-01
The Earth's surface is subdivided into eight large tectonic plates and many smaller ones. We reconstruct the plate tessellation history and demonstrate that both large and small plates display two distinct hierarchical patterns, described by different power-law size-relationships. While small plates display little organisational change through time, the structure of the large plates oscillates between minimum and maximum hierarchical tessellations. The organization of large plates rapidly chan...
Angelic Hierarchical Planning: Optimal and Online Algorithms
2008-12-06
restrict our attention to plans in I∗(Act, s0). Definition 2. (Parr and Russell, 1998) A plan ah∗ is hierarchically optimal iff ah∗ = argmin a∈I∗(Act,s0):T... Murdock, Dan Wu, and Fusun Yaman. SHOP2: An HTN planning system. JAIR, 20:379–404, 2003. Ronald Parr and Stuart Russell. Reinforcement Learning with... Angelic Hierarchical Planning: Optimal and Online Algorithms. Bhaskara Marthi, Stuart J. Russell, Jason Wolfe. Electrical Engineering and Computer
Hierarchical Needs, Income Comparisons and Happiness Levels
Drakopoulos, Stavros
2011-01-01
The cornerstone of the hierarchical approach is that there are some basic human needs which must be satisfied before non-basic needs come into the picture. The hierarchical structure of needs implies that the satisfaction of primary needs provides substantial increases to individual happiness compared to the subsequent satisfaction of secondary needs. This idea can be combined with the concept of comparison income which means that individuals compare rewards with individuals with similar char...
Evaluating Hierarchical Structure in Music Annotations.
McFee, Brian; Nieto, Oriol; Farbood, Morwaread M; Bello, Juan Pablo
2017-01-01
Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for "flat" descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.
Evaluating Hierarchical Structure in Music Annotations
Directory of Open Access Journals (Sweden)
Brian McFee
2017-08-01
Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for “flat” descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.
Hierarchical Nanoceramics for Industrial Process Sensors
Energy Technology Data Exchange (ETDEWEB)
Ruud, James A.; Brosnan, Kristen H.; Striker, Todd; Ramaswamy, Vidya; Aceto, Steven C.; Gao, Yan; Willson, Patrick D.; Manoharan, Mohan; Armstrong, Eric N.; Wachsman, Eric D.; Kao, Chi-Chang
2011-07-15
This project developed a robust, tunable, hierarchical nanoceramics materials platform for industrial process sensors in harsh-environments. Control of material structure at multiple length scales from nano to macro increased the sensing response of the materials to combustion gases. These materials operated at relatively high temperatures, enabling detection close to the source of combustion. It is anticipated that these materials can form the basis for a new class of sensors enabling widespread use of efficient combustion processes with closed loop feedback control in the energy-intensive industries. The first phase of the project focused on materials selection and process development, leading to hierarchical nanoceramics that were evaluated for sensing performance. The second phase focused on optimizing the materials processes and microstructures, followed by validation of performance of a prototype sensor in a laboratory combustion environment. The objectives of this project were achieved by: (1) synthesizing and optimizing hierarchical nanostructures; (2) synthesizing and optimizing sensing nanomaterials; (3) integrating sensing functionality into hierarchical nanostructures; (4) demonstrating material performance in a sensing element; and (5) validating material performance in a simulated service environment. The project developed hierarchical nanoceramic electrodes for mixed potential zirconia gas sensors with increased surface area and demonstrated tailored electrocatalytic activity operable at high temperatures enabling detection of products of combustion such as NOx close to the source of combustion. Methods were developed for synthesis of hierarchical nanostructures with high, stable surface area, integrated catalytic functionality within the structures for gas sensing, and demonstrated materials performance in harsh lab and combustion gas environments.
Abundance of walruses in Eastern Baffin Bay and Davis Strait
Directory of Open Access Journals (Sweden)
Mads Peter Heide-Jørgensen
2014-12-01
Walruses (Odobenus rosmarus) are exploited for subsistence purposes in West Greenland. However, current information about the abundance of walruses subject to harvest in eastern Baffin Bay has been unavailable, despite being critical for maintaining sustainable catch levels. Three visual aerial surveys were conducted in 2006 (21 March to 19 April), 2008 (3 to 12 April) and 2012 (24 March to 14 April) to estimate the number of walruses on the wintering grounds in eastern Baffin Bay and Davis Strait. Data on the fraction of walruses submerged below a 2 m detection threshold during the surveys were obtained from 24 walruses instrumented with satellite-linked time-depth recorders in northern Baffin Bay in May-June 2010-2012. An availability correction factor was estimated at 36.5% (cv=0.08) after filtering the data for an observed drift of the pressure transducer of more than 2.5 m. The surveys resulted in walrus abundance estimates that were corrected both for walruses submerged below the detection threshold and for walruses missed by the observers. The estimates of abundance were 1,105 (cv=0.31, 95% CI 610-2,002) in 2006, 1,137 (0.48, 468-2,758) in 2008 and 1,408 (0.22, 922-2,150) in 2012.
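The availability correction is simple arithmetic: a raw at-surface estimate is divided by the fraction of time walruses are within the 2 m detection threshold (36.5% here). The example count below is hypothetical.

```python
# Fraction of time walruses are above the 2 m detection threshold (from the study).
availability = 0.365

def corrected_abundance(surface_estimate):
    # Scale the at-surface count up by the inverse of availability.
    return surface_estimate / availability

# Hypothetical at-surface estimate of 514 walruses:
print(round(corrected_abundance(514)))
```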
Estimation of sea-air CO2 flux in seaweed aquaculture area, Lidao Bay%大型藻类规模化养殖水域海-气界面CO2交换通量估算
Institute of Scientific and Technical Information of China (English)
蒋增杰; 方建光; 韩婷婷; 李加琦; 毛玉泽; 王巍
2013-01-01
In order to assess the effect of seaweed aquaculture on the sea-air CO2 flux, a large-scale seaweed aquaculture area located in Lidao Bay, Shandong, was selected as the investigation area. Based on pH, total alkalinity (TA), chlorophyll-a and related data from four survey cruises in April, August and October 2011 and January 2012, the spatial and seasonal variations of the dissolved inorganic carbon (DIC) system parameters in surface water and aqueous pCO2 were investigated, and the sea-air CO2 flux was estimated. Results showed that the mean annual concentrations of DIC, HCO3-, CO32- and CO2 were 2024.8 ± 147.0, 1842.4 ± 132.1, 170.0 ± 42.8 and 12.4 ± 2.5 μmol/L, respectively. There were no significant differences between the aquaculture and non-aquaculture areas in the concentrations of DIC and HCO3- (P > 0.05), while the differences in CO2 concentration were highly significant (P < 0.01). The mean annual values of aqueous pCO2 and the sea-air CO2 flux were 287.8 ± 37.9 μatm and -32.7 ± 17.2 mmol/m2·d, respectively, with highly significant differences (P < 0.01) both between areas and between seasons. Seaweed aquaculture thus favours oceanic uptake of atmospheric CO2.
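The abstract does not give the flux formula; a common bulk parameterization is F = k · K0 · (pCO2_sea - pCO2_air), with gas transfer velocity k and CO2 solubility K0. The sketch below uses that standard form with hypothetical k and K0 values, so its output is illustrative and is not expected to reproduce the study's reported flux.

```python
def co2_flux(k_cm_per_hr, solubility_mol_per_l_atm, pco2_sea_uatm, pco2_air_uatm):
    """Bulk sea-air flux F = k * K0 * (pCO2_sea - pCO2_air), in mmol m^-2 d^-1.

    k: gas transfer velocity (cm/h); K0: CO2 solubility (mol L^-1 atm^-1).
    Negative values mean uptake of atmospheric CO2 by the sea.
    """
    k_m_per_day = k_cm_per_hr * 24 / 100            # cm/h -> m/day
    k0 = solubility_mol_per_l_atm * 1000            # mol/L/atm -> mol/m^3/atm
    dp_atm = (pco2_sea_uatm - pco2_air_uatm) * 1e-6  # uatm -> atm
    return k_m_per_day * k0 * dp_atm * 1000          # mol -> mmol

# Hypothetical inputs: k = 10 cm/h, K0 = 0.04 mol/(L*atm),
# pCO2_sea = 288 uatm (annual mean from the study), pCO2_air = 390 uatm.
flux = co2_flux(10, 0.04, 288, 390)
print(round(flux, 1))  # negative sign indicates the sea absorbs CO2
```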
HIERARCHICAL OPTIMIZATION MODEL ON GEONETWORK
Directory of Open Access Journals (Sweden)
Z. Zha
2012-07-01
In existing construction experience with Spatial Data Infrastructure (SDI), GeoNetwork, as a geographical information integration solution, is an effective way of building an SDI. When GeoNetwork serves as an internet application, several shortcomings are exposed. First, the time consumed by data loading increases considerably with the growth of the metadata count, so the efficiency of the query and search services becomes lower. Second, stability and robustness both degrade under a huge amount of metadata. Finally, the requirements of multi-user concurrent access over massive data are not effectively satisfied on the internet. A novel approach, the Hierarchical Optimization Model (HOM), is presented in this paper to address GeoNetwork's inability to work with massive data. HOM optimizes GeoNetwork in several respects: internal procedures, external deployment strategies, etc. The model builds an efficient index for accessing huge volumes of metadata and supporting concurrent processes, so that services based on GeoNetwork remain stable while handling massive metadata. As an experiment, we deployed more than 30 GeoNetwork nodes and harvested nearly 1.1 million metadata records. A comparison between the HOM-improved software and the original shows that the model makes indexing and retrieval faster and keeps the speed stable as the metadata volume increases. The system also remains stable under multi-user concurrent access to its services; the experiment achieved good results and showed that our optimization model is efficient and reliable.
75 FR 36292 - Safety Zone; Bay Swim III, Presque Isle Bay, Erie, PA
2010-06-25
... of Presque Isle Bay, Lake Erie, near Erie, Pennsylvania between 9 a.m. to 11 a.m. on June 26, 2010.... The safety zone will encompass specified waters of Presque Isle Bay, Erie, Pennsylvania starting at... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; Bay Swim III, Presque Isle Bay, Erie, PA...
77 FR 18739 - Safety Zone; Bay Swim V, Presque Isle Bay, Erie, PA
2012-03-28
... the January 17, 2008, issue of the Federal Register (73 FR 3316). Public Meeting We do not now plan to... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; Bay Swim V, Presque Isle Bay, Erie, PA... is intended to restrict vessels from a portion of the Presque Island Bay during the Bay Swim...
77 FR 35860 - Safety Zone; Bay Swim V, Presque Isle Bay, Erie, PA
2012-06-15
..., Erie, PA in the Federal Register (77 FR 18739). We received no letters commenting on the proposed rule... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; Bay Swim V, Presque Isle Bay, Erie, PA... restrict vessels from a portion of the Presque Island Bay during the Bay Swim V swimming event. The...
78 FR 34575 - Safety Zone; Bay Swim VI, Presque Isle Bay, Erie, PA
2013-06-10
... FR Federal Register NPRM Notice of Proposed Rulemaking TFR Temporary Final Rule A. Regulatory History... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; Bay Swim VI, Presque Isle Bay, Erie, PA... portion of Presque Isle bay during the Bay Swim VI swimming event. This temporary safety zone is...
Directory of Open Access Journals (Sweden)
Marta J. Cremer
2008-09-01
Pontoporia blainvillei (Gervais & d'Orbigny, 1844) is threatened throughout its distribution. The species can be found year-round in the Babitonga Bay estuary (26º02'-26º28'S and 48º28'-48º50'W), on the north coast of the state of Santa Catarina, Brazil. Boat surveys were conducted between 2000 and 2003 to evaluate its abundance and density. Sampling was random and stratified, with 46 transects in five sub-areas, comprising a total area of 160 km². Data collection followed the linear transect method with distance sampling. A total of 1174.7 km was scanned and 38 groups were observed. Franciscanas were not uniformly distributed in Babitonga Bay. Group size ranged from one to 13 animals (mean ± SD = 5.02 ± 3.62). Model 1 (half-normal) showed the best fit to the data. The estimated population size was 50 animals and the density was 0.32 individuals/km². Density estimated only over the sub-areas where franciscanas occurred was 0.46 individuals/km². Monitoring this population is of considerable importance due to the constant threats that this species faces in this bay.
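The half-normal line-transect estimator used in studies like this one can be sketched as follows. The detection-scale sigma here is hypothetical (the abstract reports only that the half-normal model fit best), so the output is illustrative rather than a reproduction of the reported 0.32 individuals/km².

```python
import math

def halfnormal_esw(sigma):
    # Effective strip half-width: integral of g(x) = exp(-x^2 / (2 sigma^2))
    # over perpendicular distance x from 0 to infinity.
    return sigma * math.sqrt(math.pi / 2)

def density(n_groups, mean_group_size, effort_km, sigma_km):
    # Line-transect estimator: D = n * E[s] / (2 * esw * L),
    # individuals per km^2, counting both sides of the trackline.
    esw = halfnormal_esw(sigma_km)
    return n_groups * mean_group_size / (2 * esw * effort_km)

# Survey's reported effort and encounters, with a hypothetical sigma of 0.1 km:
d = density(38, 5.02, 1174.7, 0.1)
print(round(d, 2))
```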
Humboldt Bay, California Benthic Habitats 2009 Substrate
National Oceanic and Atmospheric Administration, Department of Commerce — Humboldt Bay is the largest estuary in California north of San Francisco Bay and represents a significant resource for the north coast region. Beginning in 2007 the...
Humboldt Bay, California Benthic Habitats 2009 Geoform
National Oceanic and Atmospheric Administration, Department of Commerce — Humboldt Bay is the largest estuary in California north of San Francisco Bay and represents a significant resource for the north coast region. Beginning in 2007 the...
Humboldt Bay, California Benthic Habitats 2009 Geodatabase
National Oceanic and Atmospheric Administration, Department of Commerce — Humboldt Bay is the largest estuary in California north of San Francisco Bay and represents a significant resource for the north coast region. Beginning in 2007 the...
Humboldt Bay, California Benthic Habitats 2009 Biotic
National Oceanic and Atmospheric Administration, Department of Commerce — Humboldt Bay is the largest estuary in California north of San Francisco Bay and represents a significant resource for the north coast region. Beginning in 2007 the...
SF Bay Water Quality Improvement Fund
EPA's grant program to protect and restore San Francisco Bay. The San Francisco Bay Water Quality Improvement Fund (SFBWQIF) has invested in 58 projects along with 70 partners, contributing to restoring wetlands, improving water quality, and reducing polluted runoff.
Humboldt Bay Benthic Habitats 2009 Aquatic Setting
National Oceanic and Atmospheric Administration, Department of Commerce — Humboldt Bay is the largest estuary in California north of San Francisco Bay and represents a significant resource for the north coast region. Beginning in 2007 the...
Mercury-contaminated sediments in the North Bay: A legacy of the Gold Rush
Jaffe, Bruce E.
2001-01-01
A legacy of the Gold Rush is mercury-contaminated sediment in the Bay. Miners used mercury to extract gold from tailings, and a large amount of it (some estimates are as high as 10,000 tons) was lost to the watershed during the Gold Rush era. This mercury-contaminated hydraulic mining debris made its way to the Bay.
Hansen, Joakim P.; Wikström, Sofia A.; Kautsky, Lena
2008-04-01
Shallow bays with soft sediment bottoms are common habitats along the Swedish and Finnish Baltic Sea coastline. These bays undergo a process of geomorphometric evolution with the natural isostatic land-uplift process, whereby open bays and sounds decrease in depth and are gradually isolated from the sea, forming bays with narrow openings. This study tested the relationship between the morphometric isolation of the bays from the sea and the macroinvertebrate fauna community of these bays. Additionally, we tested the specific role of the submerged vegetation as an indicator of the macroinvertebrate fauna community. We chose two environmental factors for the analyses, water exchange of the bays and the taxon richness of the macroflora in the bays. We found a hierarchical relationship between water exchange, flora taxon richness, and fauna biomass and taxon richness using structural equation modelling: decreased biomass and taxon richness of fauna were related to decreased flora taxon richness, which in turn was related to decreased water exchange. Using multivariate redundancy analysis, the two environmental factors included in the model were found to explain 47.7% of the variation in the fauna taxon composition and 57.5% of the variation in the functional feeding groups of the fauna. Along the morphometric isolation gradient of the bays, the fauna assemblages changed from a community dominated by gastropods, bivalves, and crustaceans, to a community mainly consisting of a few insect taxa. Moreover, the proportion of predators, gathering collectors, and shredders increased while that of filtering collectors and scrapers decreased. Our results indicate that the density and taxon richness of macroinvertebrate fauna are higher in less morphometrically isolated bays than in more isolated bays in the Baltic Sea. Furthermore, we suggest that the taxon richness of macroflora can serve as an indicator of the fauna community.
Bayes linear statistics, theory & methods
Goldstein, Michael
2007-01-01
Bayesian methods combine information available from data with any prior information available from expert knowledge. The Bayes linear approach follows this path, offering a quantitative structure for expressing beliefs, and systematic methods for adjusting these beliefs, given observational data. The methodology differs from the full Bayesian methodology in that it establishes simpler approaches to belief specification and analysis based around expectation judgements. Bayes Linear Statistics presents an authoritative account of this approach, explaining the foundations, theory, methodology, and practicalities of this important field. The text provides a thorough coverage of Bayes linear analysis, from the development of the basic language to the collection of algebraic results needed for efficient implementation, with detailed practical examples. The book covers: The importance of partial prior specifications for complex problems where it is difficult to supply a meaningful full prior probability specification...
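The core of the Bayes linear approach is the adjusted expectation and adjusted variance; for a scalar quantity B adjusted by data D they reduce to the two one-line formulas sketched below. The prior judgements in the example are hypothetical.

```python
def adjusted_expectation(e_b, e_d, cov_bd, var_d, d_obs):
    # Bayes linear adjusted expectation:
    #   E_D(B) = E(B) + Cov(B, D) / Var(D) * (D - E(D))
    return e_b + cov_bd / var_d * (d_obs - e_d)

def adjusted_variance(var_b, cov_bd, var_d):
    # Adjusted variance: Var_D(B) = Var(B) - Cov(B, D)^2 / Var(D)
    return var_b - cov_bd ** 2 / var_d

# Hypothetical prior judgements: E(B)=10, Var(B)=4, E(D)=5, Var(D)=2, Cov(B,D)=1.5.
print(adjusted_expectation(10, 5, 1.5, 2, d_obs=8))  # belief revised upward
print(adjusted_variance(4, 1.5, 2))                  # uncertainty reduced
```

Note that only means, variances, and covariances are specified: this is the "simpler approach to belief specification" the abstract contrasts with a full prior probability distribution.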
33 CFR 165.1122 - San Diego Bay, Mission Bay and their Approaches-Regulated navigation area.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false San Diego Bay, Mission Bay and... Coast Guard District § 165.1122 San Diego Bay, Mission Bay and their Approaches—Regulated navigation... waters of San Diego Bay, Mission Bay, and their approaches encompassed by a line commencing at Point La...
Wei Wu; James Clark; James Vose
2010-01-01
Hierarchical Bayesian (HB) modeling allows for multiple sources of uncertainty by factoring complex relationships into conditional distributions that can be used to draw inference and make predictions. We applied an HB model to estimate the parameters and state variables of a parsimonious hydrological model (GR4J) by coherently assimilating the uncertainties from the...
Strayhorn, Terrell Lamont
2008-01-01
The present study estimated the influence of academic and social collegiate experiences on Latino students' sense of belonging, controlling for background differences, using hierarchical analysis techniques with a nested design. In addition, results were compared between Latino students and their White counterparts. Findings reveal that grades,…
Raykov, Tenko
2011-01-01
Interval estimation of intraclass correlation coefficients in hierarchical designs is discussed within a latent variable modeling framework. A method accomplishing this aim is outlined, which is applicable in two-level studies where participants (or generally lower-order units) are clustered within higher-order units. The procedure can also be…
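The intraclass correlation at issue is the share of total variance attributable to the higher-order units (clusters); a minimal sketch with hypothetical variance components (the paper's contribution, the latent-variable interval procedure, is not reproduced here):

```python
def icc(between_var, within_var):
    # Intraclass correlation in a two-level design:
    #   rho = tau^2 / (tau^2 + sigma^2)
    # where tau^2 is the between-cluster and sigma^2 the within-cluster variance.
    return between_var / (between_var + within_var)

# Hypothetical variance components: tau^2 = 0.5 between clusters,
# sigma^2 = 4.5 within clusters -> 10% of variance lies between clusters.
print(icc(0.5, 4.5))
```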
Missing Data Treatments at the Second Level of Hierarchical Linear Models
St. Clair, Suzanne W.
2011-01-01
The current study evaluated the performance of traditional versus modern MDTs in the estimation of fixed-effects and variance components for data missing at the second level of an hierarchical linear model (HLM) model across 24 different study conditions. Variables manipulated in the analysis included, (a) number of Level-2 variables with missing…
About wave field modeling in hierarchic medium with fractal inclusions
Hachay, Olga; Khachay, Andrey
2014-05-01
The outworking of oil and gas deposits involves the movement of polyphase, multicomponent media characterized by non-equilibrium and nonlinear rheological features. The real behavior of layered systems is determined by the complicated rheology of the moving liquids and the structural morphology of the porous media, and these factors, together with synergetic effects, must be accounted for in any substantial description of the filtration processes; accounting for them suggests new methods of controlling and managing complicated natural systems. Our research is therefore directed at the layered system from which oil is to be extracted, a complicated hierarchic dynamical system with fractal inclusions. In this paper we propose an algorithm for modeling the distribution of a 2-D seismic field in a heterogeneous medium with hierarchic inclusions. We also compare the 2-D integral for the seismic field in the frame of a local hierarchic heterogeneity with a porous inclusion and a purely elastic inclusion, for the case in which the Lamé parameter is equal to zero for the inclusions and the layered structure; in that case the problems for the transverse and longitudinal waves can be treated independently, and here we analyze the first case. The results can be used to choose criteria for joint seismic methods in the study of highly complicated media. If the boundaries of an inclusion of rank k are fractal, the surface and contour integrals in the integral equations must be replaced by repeated fractional integrals of Riemann-Liouville type. Using our previously developed 3-D method of induction electromagnetic frequency-geometric monitoring, we showed that it is possible to determine the physical and structural features of a hierarchic oil layer structure and to estimate water saturation from crack inclusions. For visualization we developed algorithms and programs for constructing cross sections for two hierarchic structural
Ecological risk assessment of TBT in Ise Bay.
Yamamoto, Joji; Yonezawa, Yoshitaka; Nakata, Kisaburo; Horiguchi, Fumio
2009-02-01
An ecological risk assessment of tributyltin (TBT) in Ise Bay was conducted using the margin of exposure (MOE) method. The assessment endpoint was defined to protect the survival, growth and reproduction of marine organisms. Sources of TBT in this study were assumed to be commercial vessels in harbors and navigation routes. Concentrations of TBT in Ise Bay were estimated using a three-dimensional hydrodynamic model, an ecosystem model and a chemical fate model. Estimated MOEs for marine organisms for 1990 and 2008 were approximately 0.1-2.0 and over 100 respectively, indicating a declining temporal trend in the probability of adverse effects. The chemical fate model predicts a much longer persistence of TBT in sediments than in the water column. Therefore, it is necessary to monitor the harmful effects of TBT on benthic organisms.
Polonium 210Po in the phytobenthos from Puck Bay.
Skwarzec, B; Ulatowski, J; Strumińska, D I; Falandysz, J
2003-04-01
The aim of the work was to determine the 210Po content in phytobenthos species (seaweeds and angiosperms) from Puck Bay (southern Baltic). Alpha spectrometry was used to measure and calculate the activities and concentrations of polonium 210Po in the phytobenthos. The activity of 210Po in Puck Bay waters was determined to estimate the bioconcentration factors (BCF) of these plants. The 210Po concentration in water was estimated at 0.25 mBq dm(-3). The lowest polonium concentration in the phytobenthos was found in Cladophora rupestris (0.12 Bq kg(-1) wet wt.), the highest in Chara crinita (1.12 Bq kg(-1) wet wt.). Polonium is accumulated in these phytobenthos species; the bioconcentration factors (BCF) ranged from 450 to 4400.
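The reported bioconcentration factors can be checked arithmetically from the quoted activities: BCF = C_plant / C_water, assuming 1 dm^3 of seawater weighs about 1 kg so that the water activity converts to Bq per kg:

```python
# Check of the reported BCF range: BCF = C_plant / C_water.
# Water activity 0.25 mBq dm^-3 is taken as 0.25e-3 Bq kg^-1
# (1 dm^3 of seawater ~ 1 kg, an approximation).
c_water = 0.25e-3            # Bq per kg of water
c_low, c_high = 0.12, 1.12   # Bq per kg wet wt. (Cladophora rupestris, Chara crinita)

bcf_low = c_low / c_water    # ~480, consistent with the reported lower bound of ~450
bcf_high = c_high / c_water  # ~4480, consistent with the reported upper bound of ~4400
```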
2010-07-01
... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false West Bay 117.622 Section 117.622 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Massachusetts § 117.622 West Bay The draw of the West Bay Bridge, mile...
Bayes' postulate for trinomial trials
Diniz, M. A.; Polpo, A.
2012-10-01
In this paper, we discuss Bayes' postulate and its interpretation. We extend the binomial trial method proposed by de Finetti [1] to trinomial trials, for which we argue that the consideration of equiprobability a priori for the possible outcomes of the trinomial trials implies that the parameter vector has Dirichlet(1,1) as prior. Based on this result, we agree with Stigler [2] in that the notion in Bayes' postulate stating "absolutely know nothing" is related to the possible outcomes of an experiment and not to "non-information" about the parameter.
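The conjugate updating implied by a uniform Dirichlet prior (all concentration parameters equal to 1, the prior the paper argues follows from a priori equiprobability of the outcomes) can be sketched as follows; the counts in the example are hypothetical:

```python
def dirichlet_posterior(counts, prior=(1, 1, 1)):
    """Posterior Dirichlet parameters and posterior mean for trinomial counts.

    With a Dirichlet prior and multinomial sampling, the posterior is again
    Dirichlet with each concentration parameter incremented by the
    corresponding observed count (conjugacy).
    """
    post = [a + n for a, n in zip(prior, counts)]
    total = sum(post)
    mean = [a / total for a in post]
    return post, mean

# After observing outcome counts (2, 3, 5), the posterior is Dirichlet(3, 4, 6).
post, mean = dirichlet_posterior((2, 3, 5))
```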
A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes
D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)
2005-01-01
textabstractThe authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format which allows to disentangle the immediate
Bayes Tutorial Using R and JAGS (Briefing Charts)
2015-05-12
Bayesian statistics: What is Bayesian data analysis? Why Bayes? Effective and flexible; combines information from different sources. Posterior predictive checking: compare the distribution of T(y, theta_k) - T(y.rep_k, theta_k) to 0. Model comparison: DIC (deviance information criterion), a generalization...
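The DIC computation mentioned in the charts can be sketched from posterior draws; the `normal_loglik` toy model below is an assumption for illustration, not part of the tutorial:

```python
import numpy as np

def dic(log_lik, thetas, y):
    """Deviance information criterion from posterior draws.

    log_lik(theta, y) -> total log-likelihood of data y at parameter theta.
    DIC = mean deviance + pD, where the effective number of parameters
    pD = mean deviance - deviance at the posterior mean.
    """
    dev = np.array([-2.0 * log_lik(t, y) for t in thetas])
    mean_dev = dev.mean()
    dev_at_mean = -2.0 * log_lik(np.mean(thetas, axis=0), y)
    p_d = mean_dev - dev_at_mean
    return mean_dev + p_d

# Toy model (illustrative): normal likelihood with known unit variance,
# posterior draws of the mean only.
def normal_loglik(mu, y):
    y = np.asarray(y)
    return float(-0.5 * np.sum((y - mu) ** 2) - 0.5 * len(y) * np.log(2 * np.pi))

dic_value = dic(normal_loglik, [0.0, 0.5, 1.0], [0.2, 0.1])
```

Lower DIC indicates the better-supported model, with pD penalising model complexity.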
Institute of Scientific and Technical Information of China (English)
陈敏; 韦来生
2014-01-01
In a linear model in which the regression coefficients and the error variance have a normal-inverse-Gamma prior distribution, and under the assumption that the design matrix is not of full column rank, simultaneous Bayes estimators of estimable functions of the regression coefficients and of the error variance are derived. The superiority of the Bayes estimators of estimable functions of the regression coefficients over the least-squares (LS) estimators is studied under the mean squared error matrix (MSEM) criterion and the Bayes Pitman Closeness (BPC) criterion respectively, and the superiority of the Bayes estimator of the error variance over the LS estimator is discussed under the mean squared error (MSE) criterion.
Impact of climate variability on an east Australian bay
Gräwe, U.; Wolff, J.-O.; Ribbe, J.
2010-01-01
The climate along the subtropical east coast of Australia is changing significantly. Rainfall has decreased by about 50 mm per decade and temperature increased by about 0.1 °C per decade during the last 50 years. These changes are likely to impact upon episodes of hypersalinity and the persistence of inverse circulations, which are often characteristic features of the coastal zone in the subtropics and are controlled by the balance between evaporation, precipitation, and freshwater discharge. In this study, observations and results from a general ocean circulation model are used to investigate how current climate trends have impacted upon the physical characteristics of the Hervey Bay, Australia. During the last two decades, mean precipitation in Hervey Bay deviates by 13% from the climatology (1941-2000). In the same time, the river discharge is reduced by 23%. In direct consequence, the frequency of hypersaline and inverse conditions has increased. Moreover, the salinity flux out of the bay has increased and the evaporation induced residual circulation has accelerated. Contrary to the drying trend, the occurrence of severe rainfalls, associated with floods, leads to short-term fluctuations in the salinity. These freshwater discharge events are used to estimate a typical response time for the bay.
BayeSED: A General Approach to Fitting the Spectral Energy Distribution of Galaxies
Han, Yunkun; Han, Zhanwen
2014-11-01
We present a newly developed version of BayeSED, a general Bayesian approach to the spectral energy distribution (SED) fitting of galaxies. The new BayeSED code has been systematically tested on a mock sample of galaxies. The comparison between the estimated and input values of the parameters shows that BayeSED can recover the physical parameters of galaxies reasonably well. We then applied BayeSED to interpret the SEDs of a large Ks -selected sample of galaxies in the COSMOS/UltraVISTA field with stellar population synthesis models. Using the new BayeSED code, a Bayesian model comparison of stellar population synthesis models has been performed for the first time. We found that the 2003 model by Bruzual & Charlot, statistically speaking, has greater Bayesian evidence than the 2005 model by Maraston for the Ks -selected sample. In addition, while setting the stellar metallicity as a free parameter obviously increases the Bayesian evidence of both models, varying the initial mass function has a notable effect only on the Maraston model. Meanwhile, the physical parameters estimated with BayeSED are found to be generally consistent with those obtained using the popular grid-based FAST code, while the former parameters exhibit more natural distributions. Based on the estimated physical parameters of the galaxies in the sample, we qualitatively classified the galaxies in the sample into five populations that may represent galaxies at different evolution stages or in different environments. We conclude that BayeSED could be a reliable and powerful tool for investigating the formation and evolution of galaxies from the rich multi-wavelength observations currently available. A binary version of the BayeSED code parallelized with Message Passing Interface is publicly available at https://bitbucket.org/hanyk/bayesed.
Self-assembled biomimetic superhydrophobic hierarchical arrays.
Yang, Hongta; Dou, Xuan; Fang, Yin; Jiang, Peng
2013-09-01
Here, we report a simple and inexpensive bottom-up technology for fabricating superhydrophobic coatings with hierarchical micro-/nano-structures, which are inspired by the binary periodic structure found on the superhydrophobic compound eyes of some insects (e.g., mosquitoes and moths). Binary colloidal arrays consisting of exemplary large (4 and 30 μm) and small (300 nm) silica spheres are first assembled by a scalable Langmuir-Blodgett (LB) technology in a layer-by-layer manner. After surface modification with fluorosilanes, the self-assembled hierarchical particle arrays become superhydrophobic with an apparent water contact angle (CA) larger than 150°. The throughput of the resulting superhydrophobic coatings with hierarchical structures can be significantly improved by templating the binary periodic structures of the LB-assembled colloidal arrays into UV-curable fluoropolymers by a soft lithography approach. Superhydrophobic perfluoroether acrylate hierarchical arrays with large CAs and small CA hysteresis can be faithfully replicated onto various substrates. Both experiments and theoretical calculations based on the Cassie's dewetting model demonstrate the importance of the hierarchical structure in achieving the final superhydrophobic surface states. Copyright © 2013 Elsevier Inc. All rights reserved.
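The Cassie dewetting calculation referred to above can be sketched as follows; the flat-surface contact angle and solid fraction in the example are illustrative values, not numbers from the paper:

```python
import math

def cassie_contact_angle(theta_flat_deg, solid_fraction):
    """Apparent contact angle (degrees) from the Cassie-Baxter dewetting model:

        cos(theta*) = f_s * (cos(theta) + 1) - 1,

    where f_s is the fraction of the drop base resting on solid and theta is
    the intrinsic contact angle on a flat surface of the same material."""
    cos_star = solid_fraction * (math.cos(math.radians(theta_flat_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(cos_star))

# Illustrative numbers: a fluorinated flat surface with a 110 degree contact
# angle and 10% solid-liquid contact fraction yields an apparent CA above
# the 150 degree superhydrophobicity threshold.
theta_star = cassie_contact_angle(110.0, 0.10)
```

Decreasing the solid fraction, which is what the hierarchical micro-/nano-structure achieves, drives the apparent angle upward.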
Analysis hierarchical model for discrete event systems
Ciortea, E. M.
2015-11-01
This paper presents a hierarchical model based on discrete event networks for robotic systems. In the hierarchical approach, the Petri net is analysed as a network spanning the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed here using the Visual Object Net++ package, which is relatively simple and easy to use and produces representations that are easy to interpret. The hierarchical structure of the robotic system is implemented and analysed on computers using specialized programs. Implementation of the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models run readily on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets; discrete event systems are a pragmatic tool for modelling industrial systems. To highlight the auxiliary times, the Petri model of the transport stream is divided into hierarchical levels and the sections are analysed successively. The proposed simulation of the robotic system using timed Petri nets offers the opportunity to view the robot's timing. By measuring the transport and transmission times of goods on the spot, graphics are obtained showing the average time for the transport activity, using the parameter sets of the finished products individually.
Hierarchical models and chaotic spin glasses
Berker, A. Nihat; McKay, Susan R.
1984-09-01
Renormalization-group studies in position space have led to the discovery of hierarchical models which are exactly solvable, exhibiting nonclassical critical behavior at finite temperature. Position-space renormalization-group approximations that had been widely and successfully used are in fact alternatively applicable as exact solutions of hierarchical models, this realizability guaranteeing important physical requirements. For example, a hierarchized version of the Sierpiński gasket is presented, corresponding to a renormalization-group approximation which has quantitatively yielded the multicritical phase diagrams of submonolayers on graphite. Hierarchical models are now being studied directly as a testing ground for new concepts. For example, with the introduction of frustration, chaotic renormalization-group trajectories were obtained for the first time. Thus, strong and weak correlations are randomly intermingled at successive length scales, and a new microscopic picture and mechanism for a spin glass emerges. An upper critical dimension occurs via a boundary crisis mechanism in cluster-hierarchical variants developed to have well-behaved susceptibilities.
Low-Level Hierarchical Multiscale Segmentation Statistics of Natural Images.
Akbas, Emre; Ahuja, Narendra
2014-09-01
This paper is aimed at obtaining the statistics as a probabilistic model pertaining to the geometric, topological and photometric structure of natural images. The image structure is represented by its segmentation graph derived from the low-level hierarchical multiscale image segmentation. We first estimate the statistics of a number of segmentation graph properties from a large number of images. Our estimates confirm some findings reported in the past work, as well as provide some new ones. We then obtain a Markov random field based model of the segmentation graph which subsumes the observed statistics. To demonstrate the value of the model and the statistics, we show how its use as a prior impacts three applications: image classification, semantic image segmentation and object detection.
Comparison of hierarchical and six degrees-of-freedom marker sets in analyzing gait kinematics.
Schmitz, Anne; Buczek, Frank L; Bruening, Dustin; Rainbow, Michael J; Cooney, Kevin; Thelen, Darryl
2016-01-01
The objective of this study was to determine how marker spacing, noise, and joint translations affect joint angle calculations using both a hierarchical and a six degrees-of-freedom (6DoF) marker set. A simple two-segment model demonstrates that a hierarchical marker set produces biased joint rotation estimates when sagittal joint translations occur whereas a 6DoF marker set mitigates these bias errors with precision improving with increased marker spacing. These effects were evident in gait simulations where the 6DoF marker set was shown to be more accurate at tracking axial rotation angles at the hip, knee, and ankle.
Backscatter imagery in Jobos Bay
National Oceanic and Atmospheric Administration, Department of Commerce — This image represents a 1x1 meter resolution backscatter mosaic of Jobos Bay, Puerto Rico (in NAD83 UTM 19 North). The backscatter values are in relative 8-bit (0 ...
On the analyticity of the pressure in the hierarchical dipole gas
Energy Technology Data Exchange (ETDEWEB)
Benfatto, G.; Gallavotti, G.; Nicolo, F. (Universita dell' Aquila (Italy))
1989-05-01
The authors attempt to prove, by the direct estimation of the convergence radius, the convergence of the Mayer expansion for the dipole gas, with the aim of developing techniques eventually suitable to prove the often conjectured convergence of the Mayer expansion for the two-dimensional Coulomb gas at low temperature. The treatment stems from their technique for sharp estimates on the truncated expectations for a hierarchical dipole gas model.
Inferring on the intentions of others by hierarchical Bayesian learning.
Diaconescu, Andreea O; Mathys, Christoph; Weber, Lilian A E; Daunizeau, Jean; Kasper, Lars; Lomakina, Ekaterina I; Fehr, Ernst; Stephan, Klaas E
2014-09-01
Inferring on others' (potentially time-varying) intentions is a fundamental problem during many social transactions. To investigate the underlying mechanisms, we applied computational modeling to behavioral data from an economic game in which 16 pairs of volunteers (randomly assigned to "player" or "adviser" roles) interacted. The player performed a probabilistic reinforcement learning task, receiving information about a binary lottery from a visual pie chart. The adviser, who received more predictive information, issued an additional recommendation. Critically, the game was structured such that the adviser's incentives to provide helpful or misleading information varied in time. Using a meta-Bayesian modeling framework, we found that the players' behavior was best explained by the deployment of hierarchical learning: they inferred upon the volatility of the advisers' intentions in order to optimize their predictions about the validity of their advice. Beyond learning, volatility estimates also affected the trial-by-trial variability of decisions: participants were more likely to rely on their estimates of advice accuracy for making choices when they believed that the adviser's intentions were presently stable. Finally, our model of the players' inference predicted the players' interpersonal reactivity index (IRI) scores, explicit ratings of the advisers' helpfulness and the advisers' self-reports on their chosen strategy. Overall, our results suggest that humans (i) employ hierarchical generative models to infer on the changing intentions of others, (ii) use volatility estimates to inform decision-making in social interactions, and (iii) integrate estimates of advice accuracy with non-social sources of information. The Bayesian framework presented here can quantify individual differences in these mechanisms from simple behavioral readouts and may prove useful in future clinical studies of maladaptive social cognition.
Directory of Open Access Journals (Sweden)
A. V. Borges
2004-10-01
The relationship between whole-system metabolism estimates based on planktonic and benthic incubations (bare sediments and seagrass, Posidonia oceanica, meadows) and CO2 fluxes across the air-sea interface was examined in the Bay of Palma (Mallorca, Spain) during two cruises in March and June 2002. Moreover, planktonic and benthic incubations were performed at monthly intervals from March 2001 to October 2002 in a seagrass-vegetated area of the bay. The annual study showed a contrast between the planktonic compartment, which was heterotrophic during most of the year except for occasional bloom episodes, and the benthic compartment, which was slightly autotrophic. Whereas the seagrass community was autotrophic, the excess organic carbon production therein could only balance the excess respiration of the planktonic compartment in shallow waters (2 fields and fluxes across the bay observed during the two extensive cruises in 2002. Finally, dissolved inorganic carbon and oxygen budgets provided NEP estimates in fair agreement with those derived from direct metabolic estimates based on incubated samples over the Posidonia oceanica meadow.
Biased trapping issue on weighted hierarchical networks
Indian Academy of Sciences (India)
Meifeng Dai; Jie Liu; Feng Zhu
2014-10-01
In this paper, we present trapping issues of weight-dependent walks on weighted hierarchical networks which are based on the classic scale-free hierarchical networks. Assuming that an edge's weight is used as local information by a random walker, we introduce a biased walk: at each step, the walker chooses one of its neighbours with a probability proportional to the weight of the connecting edge. We focus on a particular case with the immobile trap positioned at the hub node, which has the largest degree in the weighted hierarchical networks. Using a method based on generating functions, we determine explicitly the mean first-passage time (MFPT) for the trapping issue. Let the parameter a (0 < a < 1) be the weight factor. We show that the efficiency of the trapping process depends on the parameter a: the smaller the value of a, the more efficient is the trapping process.
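The biased walk rule and a Monte Carlo estimate of the MFPT can be sketched as below; the toy three-node graph is hypothetical and stands in for the weighted hierarchical network of the paper, which derives the MFPT analytically via generating functions:

```python
import random

def biased_step(node, weights, rng):
    """One step of the biased walk: pick a neighbour with probability
    proportional to the weight of the connecting edge.
    weights[node] is a dict {neighbour: edge_weight}."""
    nbrs = list(weights[node])
    w = [weights[node][v] for v in nbrs]
    return rng.choices(nbrs, weights=w, k=1)[0]

def mean_first_passage_time(weights, start, trap, n_walks=2000, seed=1):
    """Monte Carlo estimate of the MFPT from `start` to the immobile trap."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walks):
        node, steps = start, 0
        while node != trap:
            node = biased_step(node, weights, rng)
            steps += 1
        total += steps
    return total / n_walks

# Toy weighted graph: a path 0-1-2 with the trap at node 2; the heavier
# edge 1-2 biases the walker towards the trap (exact MFPT from node 0 is 8/3).
g = {0: {1: 1.0}, 1: {0: 1.0, 2: 3.0}, 2: {1: 1.0}}
mfpt = mean_first_passage_time(g, start=0, trap=2)
```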
Improving broadcast channel rate using hierarchical modulation
Meric, Hugo; Arnal, Fabrice; Lesthievent, Guy; Boucheret, Marie-Laure
2011-01-01
We investigate the design of a broadcast system where the aim is to maximise the throughput. This task is usually challenging due to the channel variability. Forty years ago, Cover introduced and compared two schemes: time sharing and superposition coding. The second scheme was proved to be optimal for some channels. Modern satellite communications systems such as DVB-SH and DVB-S2 mainly rely on time sharing strategy to optimize throughput. They consider hierarchical modulation, a practical implementation of superposition coding, but only for unequal error protection or backward compatibility purposes. We propose in this article to combine time sharing and hierarchical modulation together and show how this scheme can improve the performance in terms of available rate. We present the gain on a simple channel modeling the broadcasting area of a satellite. Our work is applied to the DVB-SH standard, which considers hierarchical modulation as an optional feature.
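The comparison between time sharing and superposition coding can be sketched on a two-user Gaussian broadcast channel; this textbook model (a power split alpha, with the stronger receiver cancelling the weaker user's layer) is an illustration of the two schemes, not the DVB-SH link budget of the paper:

```python
import math

def c(snr):
    """Gaussian channel capacity in bit/s/Hz: C = log2(1 + SNR)."""
    return math.log2(1.0 + snr)

def superposition_rates(P, n_good, n_bad, alpha):
    """Rate pair under superposition coding: a fraction alpha of the power
    carries the good user's layer. The good receiver decodes and cancels
    the bad user's layer; the bad receiver treats the other layer as noise."""
    r_good = c(alpha * P / n_good)
    r_bad = c((1.0 - alpha) * P / (alpha * P + n_bad))
    return r_good, r_bad

def time_sharing_rates(P, n_good, n_bad, tau):
    """Rate pair when the users are served fractions tau and (1 - tau) of the time."""
    return tau * c(P / n_good), (1.0 - tau) * c(P / n_bad)

# Illustrative numbers: total power 10, good-user noise 0.1, bad-user noise 1.
r_good, r_bad = superposition_rates(10.0, 0.1, 1.0, alpha=0.5)
```

For channels with sufficiently different receiver qualities, superposition coding achieves rate pairs outside the time-sharing region, which is the gain the article exploits via hierarchical modulation.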
Incentive Mechanisms for Hierarchical Spectrum Markets
Iosifidis, George; Alpcan, Tansu; Koutsopoulos, Iordanis
2011-01-01
We study spectrum allocation mechanisms in hierarchical multi-layer markets which are expected to proliferate in the near future based on the current spectrum policy reform proposals. We consider a setting where a state agency sells spectrum to Primary Operators (POs) and in turn these resell it to Secondary Operators (SOs) through auctions. We show that these hierarchical markets do not result in a socially efficient spectrum allocation which is aimed by the agency, due to lack of coordination among the entities in different layers and the inherently selfish revenue-maximizing strategy of POs. In order to reconcile these opposing objectives, we propose an incentive mechanism which aligns the strategy and the actions of the POs with the objective of the agency, and thus it leads to system performance improvement in terms of social welfare. This pricing based mechanism constitutes a method for hierarchical market regulation and requires the feedback provision from SOs. A basic component of the proposed incenti...
Hierarchical self-organization of tectonic plates
Morra, Gabriele; Müller, R Dietmar
2010-01-01
The Earth's surface is subdivided into eight large tectonic plates and many smaller ones. We reconstruct the plate tessellation history and demonstrate that both large and small plates display two distinct hierarchical patterns, described by different power-law size relationships. While small plates display little organisational change through time, the structure of the large plates oscillates between minimum and maximum hierarchical tessellations. The organization of large plates rapidly changed from a weak hierarchy at 120-100 million years ago (Ma) towards a strong hierarchy, which peaked at 65-50 Ma, subsequently relaxing back towards a minimum hierarchical structure. We suggest that this fluctuation reflects an alternation between top- and bottom-driven plate tectonics, revealing a previously undiscovered tectonic cyclicity at a timescale of 100 million years.
Towards a sustainable manufacture of hierarchical zeolites.
Verboekend, Danny; Pérez-Ramírez, Javier
2014-03-01
Hierarchical zeolites have been established as a superior type of aluminosilicate catalysts compared to their conventional (purely microporous) counterparts. An impressive array of bottom-up and top-down approaches has been developed during the last decade to design and subsequently exploit these exciting materials catalytically. However, the sustainability of the developed synthetic methods has rarely been addressed. This paper highlights important criteria to ensure the ecological and economic viability of the manufacture of hierarchical zeolites. Moreover, by using base leaching as a promising case study, we verify a variety of approaches to increase reactor productivity, recycle waste streams, prevent the combustion of organic compounds, and minimize separation efforts. By reducing their synthetic footprint, hierarchical zeolites are positioned as an integral part of sustainable chemistry. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hierarchical Neural Network Structures for Phoneme Recognition
Vasquez, Daniel; Minker, Wolfgang
2013-01-01
In this book, hierarchical structures based on neural networks are investigated for automatic speech recognition. These structures are evaluated on the phoneme recognition task using a hybrid Hidden Markov Model/Artificial Neural Network paradigm. The baseline hierarchical scheme consists of two levels, each based on a Multilayered Perceptron, with the output of the first level serving as input to the second level. The computational speed of the phoneme recognizer can be substantially increased by removing redundant information still contained in the first-level output. Several techniques based on temporal and phonetic criteria have been investigated to remove this redundant information; the computational time could be reduced by 57% while keeping the system accuracy comparable to the baseline hierarchical approach.
Universal hierarchical behavior of citation networks
Mones, Enys; Vicsek, Tamás
2014-01-01
Many of the essential features of the evolution of scientific research are imprinted in the structure of citation networks. Connections in these networks carry information about the transfer of knowledge among papers; in other words, edges describe the impact of papers on other publications. This inherent meaning of the edges implies that citation networks can exhibit hierarchical features of the kind typical of networks based on decision-making. In this paper, we investigate the hierarchical structure of citation networks consisting of papers in the same field. We find that the majority of the networks follow a universal trend towards a highly hierarchical state, and that the various fields differ only in i) their phase in life (distance from the "birth" of a field) or ii) the characteristic time according to which they approach the stationary state. We also show by a simple argument that the alterations in the behavior are related to and can be understood by the degree of specializatio...
Static and dynamic friction of hierarchical surfaces
Costagliola, Gianluca; Bosia, Federico; Pugno, Nicola M.
2016-12-01
Hierarchical structures are very common in nature, but only recently have they been systematically studied in materials science, in order to understand the specific effects they can have on the mechanical properties of various systems. Structural hierarchy provides a way to tune and optimize macroscopic mechanical properties starting from simple base constituents and new materials are nowadays designed exploiting this possibility. This can be true also in the field of tribology. In this paper we study the effect of hierarchical patterned surfaces on the static and dynamic friction coefficients of an elastic material. Our results are obtained by means of numerical simulations using a one-dimensional spring-block model, which has previously been used to investigate various aspects of friction. Despite the simplicity of the model, we highlight some possible mechanisms that explain how hierarchical structures can significantly modify the friction coefficients of a material, providing a means to achieve tunability.
Hierarchical hybrid testability modeling and evaluation method based on information fusion
Institute of Scientific and Technical Information of China (English)
Xishan Zhang; Kaoli Huang; Pengcheng Yan; Guangyao Lian
2015-01-01
In order to meet the demand for testability analysis and evaluation of complex equipment under a small sample test in the equipment life cycle, the hierarchical hybrid testability modeling and evaluation method (HHTME), which combines the testability structure model (TSM) with the testability Bayesian networks model (TBNM), is presented. Firstly, the testability network topology of complex equipment is built using the hierarchical hybrid testability modeling method. Secondly, the prior conditional probability distribution between network nodes is determined from expert experience. Then the Bayesian method is used to update the conditional probability distribution according to historical test information, virtual simulation information and similar product information. Finally, the learned hierarchical hybrid testability model (HHTM) is used to estimate the testability of the equipment. Compared with the results of other modeling methods, the relative deviation of the HHTM is only 0.52%, and its evaluation result is the most accurate.
Directory of Open Access Journals (Sweden)
Satadal Saha
2011-07-01
An innovative hierarchical image segmentation scheme is reported in this research communication. Unlike static, spatially divided sub-images, the current innovation concentrates on object-level hierarchy for segmentation of gray-scale or color images into constituent components/sub-parts. For example, a gray-scale document image may be segmented (binarized, in the case of two-level segmentation) into connected foreground components (text/graphics) and a background component by hierarchically applying a gray-level threshold selection algorithm in the object space. In any hierarchy, constituent objects are identified as connected foreground pixels, as classified by the gray-scale threshold selection algorithm. To preserve global information, the threshold for each object in any hierarchy is estimated as a weighted aggregate of the current and previous thresholds relevant to the object. The developed technique may be customized as a general-purpose hierarchical information clustering algorithm in the domains of pattern analysis, data mining, bioinformatics, etc.
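A minimal sketch of the two ingredients: a gray-level threshold selector (Otsu's between-class-variance criterion is used here as a stand-in; the paper does not name its algorithm) and the weighted aggregation of an object's own threshold with the one inherited from the previous hierarchy level (the specific weighting is an assumption):

```python
import numpy as np

def otsu_threshold(values):
    """Gray-level threshold by Otsu's between-class-variance maximisation
    on 8-bit values (stand-in for the paper's unnamed selector)."""
    hist = np.bincount(values.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]
    valid = (omega > 0) & (omega < 1)
    sigma_b = np.zeros(256)
    sigma_b[valid] = (mu_t * omega[valid] - mu[valid]) ** 2 / (
        omega[valid] * (1.0 - omega[valid]))
    return int(np.argmax(sigma_b))

def object_threshold(pixels, parent_threshold, w=0.5):
    """Threshold for one connected component: weighted aggregate of its
    own threshold and the threshold inherited from the previous
    hierarchy level (w is an illustrative choice)."""
    return w * otsu_threshold(pixels) + (1.0 - w) * parent_threshold
```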
A Bayesian hierarchical diffusion model decomposition of performance in Approach-Avoidance Tasks.
Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan
2015-01-01
Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach-Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data-sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight in latent psychological processes of interest.
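The generative side of the diffusion model can be sketched with an Euler-Maruyama simulation of the Wiener process between two absorbing boundaries (the hierarchical Bayesian estimation layer of the paper is not reproduced here; parameter values are illustrative):

```python
import numpy as np

def simulate_ddm(v=1.0, a=2.0, z=0.5, sigma=1.0, dt=1e-3,
                 n_trials=2000, max_t=5.0, seed=0):
    """Simulate the Wiener diffusion model: drift v, boundary
    separation a, relative start point z.  Returns response times and
    choices (1 = upper boundary, 0 = lower boundary or timeout)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_trials, z * a)            # evidence accumulators
    t = np.zeros(n_trials)
    rt = np.full(n_trials, max_t)
    choice = np.zeros(n_trials)
    active = np.ones(n_trials, dtype=bool)
    for _ in range(int(max_t / dt)):
        if not active.any():
            break
        n_act = int(active.sum())
        x[active] += v * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_act)
        t[active] += dt
        hit_up = active & (x >= a)
        hit_lo = active & (x <= 0.0)
        choice[hit_up] = 1.0
        done = hit_up | hit_lo
        rt[done] = t[done]
        active &= ~done
    return rt, choice
```

With positive drift, upper-boundary ("approach") responses dominate; hierarchical estimation would then recover v, a and z per participant while pooling across the group.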
Hierarchical control of electron-transfer
DEFF Research Database (Denmark)
Westerhoff, Hans V.; Jensen, Peter Ruhdal; Egger, Louis;
1997-01-01
In this chapter the role of electron transfer in determining the behaviour of the ATP-synthesising enzyme in E. coli is analysed. It is concluded that the latter enzyme lacks control because of special properties of the electron transfer components. These properties range from the absence of a strong back pressure by the protonmotive force on the rate of electron transfer, to hierarchical regulation of the expression of the genes that encode the electron transfer proteins as a response to changes in the bioenergetic properties of the cell. The discussion uses Hierarchical Control Analysis…
Genetic Algorithm for Hierarchical Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Sajid Hussain
2007-09-01
Full Text Available Large scale wireless sensor networks (WSNs can be used for various pervasive and ubiquitous applications such as security, health-care, industry automation, agriculture, environment and habitat monitoring. As hierarchical clusters can reduce the energy consumption requirements for WSNs, we investigate intelligent techniques for cluster formation and management. A genetic algorithm (GA is used to create energy efficient clusters for data dissemination in wireless sensor networks. The simulation results show that the proposed intelligent hierarchical clustering technique can extend the network lifetime for different network deployment environments.
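A toy version of GA-based cluster-head selection can be sketched as follows. The chromosome encoding (one bit per node), the energy proxy (distance to nearest head plus a per-head cost), and all constants are illustrative assumptions, not the paper's energy model:

```python
import numpy as np

def ga_cluster_heads(coords, n_pop=40, n_gen=60, p_mut=0.02,
                     head_cost=50.0, seed=0):
    """Toy GA for cluster-head selection in a WSN (illustrative).

    A chromosome is a bit per node (1 = cluster head); the cost
    penalises the total member-to-head distance (a crude energy proxy)
    plus head_cost per head.  Elitist selection, one-point crossover,
    bit-flip mutation.
    """
    rng = np.random.default_rng(seed)
    n = len(coords)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)

    def cost(chrom):
        heads = np.flatnonzero(chrom)
        if heads.size == 0:
            return 1e9                      # invalid: no heads at all
        return dists[:, heads].min(axis=1).sum() + head_cost * heads.size

    pop = (rng.random((n_pop, n)) < 0.2).astype(int)
    for _ in range(n_gen):
        costs = np.array([cost(c) for c in pop])
        pop = pop[np.argsort(costs)]        # elitist: best first
        children = []
        while len(children) < n_pop // 2:
            p1, p2 = pop[rng.integers(0, n_pop // 2, size=2)]
            cut = rng.integers(1, n)        # one-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            child[rng.random(n) < p_mut] ^= 1   # mutation
            children.append(child)
        pop = np.vstack([pop[:n_pop - len(children)], *children])
    costs = np.array([cost(c) for c in pop])
    return pop[np.argmin(costs)], costs.min()
```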
DC Hierarchical Control System for Microgrid Applications
Lu, Xiaonan; Sun, Kai; Guerrero, Josep M.; Huang, Lipei
2013-01-01
In order to enhance the DC-side performance of an AC-DC hybrid microgrid, a DC hierarchical control system is proposed in this paper. To meet the requirement of DC load sharing between the parallel power interfaces, the droop method is adopted. Meanwhile, DC voltage secondary control is employed to restore the deviation in the DC bus voltage. The hierarchical control system is composed of two levels. DC voltage and AC current controllers are achieved in the primary control level.
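The two control levels can be sketched in steady state: primary droop lets parallel converters share the load at the cost of a voltage deviation, and a secondary integral action shifts the common set-point until the bus voltage is restored. All gains and ratings below are illustrative assumptions:

```python
def bus_voltage(v_set, r1, r2, r_load):
    """Steady-state DC bus voltage for two droop-controlled converters
    (droop gains r1, r2 in ohms) feeding a resistive load r_load."""
    g = 1.0 / r1 + 1.0 / r2
    return v_set * g / (1.0 / r_load + g)

def simulate(v_nom=48.0, r1=0.5, r2=0.5, r_load=4.0, ki=0.2, steps=200):
    """Primary droop causes a load-dependent deviation; the secondary
    integral loop (gain ki) raises the common set-point until the bus
    voltage returns to v_nom.  Equal droop gains give equal sharing."""
    dv = 0.0
    for _ in range(steps):
        v = bus_voltage(v_nom + dv, r1, r2, r_load)
        dv += ki * (v_nom - v)          # secondary (restoration) action
    i1 = (v_nom + dv - v) / r1          # converter output currents
    i2 = (v_nom + dv - v) / r2
    return v, i1, i2
```

With equal droop gains the two converters deliver equal current, and the secondary loop removes the droop-induced voltage sag.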
Hierarchical social networks and information flow
López, Luis; F. F. Mendes, Jose; Sanjuán, Miguel A. F.
2002-12-01
Using a simple model for the information flow on social networks, we show that the traditional hierarchical topologies frequently used by companies and organizations are poorly designed in terms of efficiency. Moreover, we prove that this type of structure is the result of the individual aim of monopolizing as much information as possible within the network. As information is an appropriate measurement of centrality, we conclude that this kind of topology is attractive for leaders because the global influence each actor has within the network is completely determined by the hierarchical level occupied.
Analyzing security protocols in hierarchical networks
DEFF Research Database (Denmark)
Zhang, Ye; Nielson, Hanne Riis
2006-01-01
Validating security protocols is a well-known hard problem even in the simple setting of a single global network. But a real network often consists of, besides the publicly accessible part, several sub-networks, and thereby forms a hierarchical structure. In this paper we first present a process calculus capturing the characteristics of hierarchical networks and describe the behavior of protocols on such networks. We then develop a static analysis to automate the validation. Finally we demonstrate how the technique can benefit protocol development and the design of network systems by presenting a series…
Hierarchic Models of Turbulence, Superfluidity and Superconductivity
Kaivarainen, A
2000-01-01
New models of turbulence, superfluidity and superconductivity, based on a new hierarchic theory, general for liquids and solids (physics/0102086), have been proposed. Contents: 1. Turbulence: general description; 2. Mesoscopic mechanism of turbulence; 3. Superfluidity: general description; 4. Mesoscopic scenario of fluidity; 5. Superfluidity as a hierarchic self-organization process; 6. Superfluidity in 3He; 7. Superconductivity: general properties of metals and semiconductors; plasma oscillations; cyclotron resonance; electroconductivity; 8. Microscopic theory of superconductivity (BCS); 9. Mesoscopic scenario of superconductivity: interpretation of experimental data in the framework of the mesoscopic model of superconductivity.
Hierarchical Analysis of the Omega Ontology
Energy Technology Data Exchange (ETDEWEB)
Joslyn, Cliff A.; Paulson, Patrick R.
2009-12-01
Initial delivery for mathematical analysis of the Omega Ontology. We provide an analysis of the hierarchical structure of a version of the Omega Ontology currently in use within the US Government. After providing an initial statistical analysis of the distribution of all link types in the ontology, we then provide a detailed order theoretical analysis of each of the four main hierarchical links present. This order theoretical analysis includes the distribution of components and their properties, their parent/child and multiple inheritance structure, and the distribution of their vertical ranks.
Hierarchical self-organization of non-cooperating individuals.
Directory of Open Access Journals (Sweden)
Tamás Nepusz
Hierarchy is one of the most conspicuous features of numerous natural, technological and social systems. The underlying structures are typically complex, and their most relevant organizational principle is the ordering of the ties among the units they are made of according to a network displaying hierarchical features. In spite of the abundant presence of hierarchy, no quantitative theoretical interpretation of the origins of a multi-level, knowledge-based social network exists. Here we introduce an approach which is capable of reproducing the emergence of a multi-levelled network structure based on the plausible assumption that the individuals (representing the nodes of the network) can make the right estimate about the state of their changing environment to a varying degree. Our model accounts for a fundamental feature of knowledge-based organizations: the less capable individuals tend to follow those who are better at solving the problems they all face. We find that relatively simple rules lead to hierarchical self-organization, and the specific structures we obtain possess two of perhaps the most important features of complex systems: the simultaneous presence of adaptability and stability. In addition, the performance (success score) of the emerging networks is significantly higher than the average expected score of the individuals without letting them copy the decisions of the others. The results of our calculations are in agreement with a related experiment and can be useful from the point of view of designing the optimal conditions for constructing a given complex social structure, as well as understanding the hierarchical organization of such biological structures of major importance as the regulatory pathways or the dynamics of neural networks.
Probably Almost Bayes Decisions
DEFF Research Database (Denmark)
Anoulova, S.; Fischer, Paul; Poelt, S.
1996-01-01
In this paper, we investigate the problem of classifying objects which are given by feature vectors with Boolean entries. Our aim is to "(efficiently) learn probably almost optimal classifications" from examples. A classical approach in pattern recognition uses empirical estimations of the Bayesian discriminant functions for this purpose. We analyze this approach for different classes of distribution functions of Boolean features: kth order Bahadur-Lazarsfeld expansions and kth order Chow expansions. In both cases, we obtain upper bounds for the required sample size which are small polynomials in the relevant parameters and which match the lower bounds known for these classes. Moreover, the learning algorithms are efficient.
A Hierarchical Bayesian Model to Predict Self-Thinning Line for Chinese Fir in Southern China.
Directory of Open Access Journals (Sweden)
Xiongqing Zhang
Self-thinning is a dynamic equilibrium between forest growth and mortality at full site occupancy. Parameters of the self-thinning lines are often confounded by differences across various stand and site conditions. To overcome the problem of hierarchical and repeated measures, we used a hierarchical Bayesian method to estimate the self-thinning line. The results showed that the self-thinning line for Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantations was not sensitive to the initial planting density. The uncertainty of model predictions was mostly due to within-subject variability. The simulation precision of the hierarchical Bayesian method was better than that of the stochastic frontier function (SFF). The hierarchical Bayesian method provided a reasonable explanation of the impact of other variables (site quality, soil type, aspect, etc.) on the self-thinning line, and gave us the posterior distribution of the parameters of the self-thinning line. Research on the self-thinning relationship could benefit from the use of the hierarchical Bayesian method.
A hierarchical Bayesian framework for force field selection in molecular dynamics simulations.
Wu, S; Angelikopoulos, P; Papadimitriou, C; Moser, R; Koumoutsakos, P
2016-02-13
We present a hierarchical Bayesian framework for the selection of force fields in molecular dynamics (MD) simulations. The framework associates the variability of the optimal parameters of the MD potentials under different environmental conditions with the corresponding variability in experimental data. The high computational cost associated with the hierarchical Bayesian framework is reduced by orders of magnitude through a parallelized Transitional Markov Chain Monte Carlo method combined with the Laplace Asymptotic Approximation. The suitability of the hierarchical approach is demonstrated by performing MD simulations with prescribed parameters to obtain data for transport coefficients under different conditions, which are then used to infer and evaluate the parameters of the MD model. We demonstrate the selection of MD models based on experimental data and verify that the hierarchical model can accurately quantify the uncertainty across experiments, improve the estimation of the posterior probability density function of the parameters (and thus improve predictions on future experiments), and identify the most plausible force field to describe the underlying structure of a given dataset. The framework and associated software are applicable to a wide range of nanoscale simulations associated with experimental data with a hierarchical structure.
Robust Real-Time Music Transcription with a Compositional Hierarchical Model
Pesek, Matevž; Leonardis, Aleš; Marolt, Matija
2017-01-01
The paper presents a new compositional hierarchical model for robust music transcription. Its main features are unsupervised learning of a hierarchical representation of input data, transparency, which enables insights into the learned representation, as well as robustness and speed, which make it suitable for real-world and real-time use. The model consists of multiple layers, each composed of a number of parts. The hierarchical nature of the model corresponds well to hierarchical structures in music. The parts in lower layers correspond to low-level concepts (e.g. tone partials), while the parts in higher layers combine lower-level representations into more complex concepts (tones, chords). The layers are learned in an unsupervised manner from music signals. Parts in each layer are compositions of parts from previous layers, with statistical co-occurrences as the driving force of the learning process. In the paper, we present the model's structure and compare it to other hierarchical approaches in the field of music information retrieval. We evaluate the model's performance on multiple fundamental frequency estimation. Finally, we elaborate on extensions of the model towards other music information retrieval tasks. PMID:28046074
Poor-data and data-poor species stock assessment using a Bayesian hierarchical approach.
Jiao, Yan; Cortés, Enric; Andrews, Kate; Guo, Feng
2011-10-01
Appropriate inference for stocks or species with low-quality data (poor data) or limited data (data poor) is extremely important. Hierarchical Bayesian methods are especially applicable to small-area, small-sample-size estimation problems because they allow poor-data species to borrow strength from species with good-quality data. We used a hammerhead shark complex as an example to investigate the advantages of using hierarchical Bayesian models in assessing the status of poor-data and data-poor exploited species. The hammerhead shark complex (Sphyrna spp.) along the Atlantic and Gulf of Mexico coasts of the United States is composed of three species: the scalloped hammerhead (S. lewini), the great hammerhead (S. mokarran), and the smooth hammerhead (S. zygaena) sharks. The scalloped hammerhead comprises 70-80% of the catch and has catch and relative abundance data of good quality, whereas great and smooth hammerheads have relative abundance indices that are both limited and of low quality presumably because of low stock density and limited sampling. Four hierarchical Bayesian state-space surplus production models were developed to simulate variability in population growth rates, carrying capacity, and catchability of the species. The results from the hierarchical Bayesian models were considerably more robust than those of the nonhierarchical models. The hierarchical Bayesian approach represents an intermediate strategy between traditional models that assume different population parameters for each species and those that assume all species share identical parameters. Use of the hierarchical Bayesian approach is suggested for future hammerhead shark stock assessments and for modeling fish complexes with species-specific data, because the poor-data species can borrow strength from the species with good data, making the estimation more stable and robust.
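The "borrowing strength" idea behind the hierarchical models can be sketched with a normal partial-pooling estimator, in which data-poor groups are shrunk harder towards the grand mean. This is a minimal stand-in with variances assumed known; the paper's state-space surplus production models are far richer, and the numbers are illustrative:

```python
import numpy as np

def partial_pool(means, ns, sigma2, tau2):
    """Posterior means of group effects under a normal hierarchical
    model with known within-group variance sigma2 and between-group
    variance tau2.  Small-n groups get small reliability weights and
    are shrunk towards the precision-weighted grand mean."""
    means = np.asarray(means, float)
    ns = np.asarray(ns, float)
    grand = np.average(means, weights=ns)
    w = tau2 / (tau2 + sigma2 / ns)     # reliability weight per group
    return w * means + (1.0 - w) * grand

# hypothetical example: a data-rich and a data-poor species
# (sample-mean growth rates 0.10 and 0.30, sample sizes 200 and 4)
pooled = partial_pool(means=[0.10, 0.30], ns=[200, 4],
                      sigma2=0.04, tau2=0.01)
```

The data-poor species' estimate moves roughly halfway to the grand mean, while the data-rich species' estimate barely changes — the stabilising effect the abstract describes.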
Zhang, Yongsheng; Wei, Heng; Zheng, Kangning
2017-01-01
Considering that metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Given the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in hierarchical Bayes form, and a Metropolis-Hastings (M-H) sampling based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows good forecasting performance for route choice probability calculation and good application performance for transfer flow volume prediction. PMID:28591188
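The sampling machinery can be sketched with a generic random-walk Metropolis-Hastings kernel. The paper's sampler targets the CMNP posterior; here a standard normal stands in as the target, and the step size is an illustrative choice:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples=20000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: propose x' = x + step * N(0,1),
    accept with probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = np.random.default_rng(seed)
    x = x0
    lp = log_post(x)
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept
            x, lp = prop, lp_prop
        out[i] = x                                # else keep current
    return out

# stand-in target: standard normal, log density -x^2/2 up to a constant
samples = metropolis_hastings(lambda t: -0.5 * t * t, x0=0.0)
```

In the hierarchical Bayes setting, one such update per parameter block is iterated within a Gibbs-style sweep.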
The Carolina Bay Restoration Project - Final Report 2000-2006.
Energy Technology Data Exchange (ETDEWEB)
Barton, Christopher
2007-12-15
A Wetlands Mitigation Bank was established at SRS in 1997 as a compensatory alternative for unavoidable wetland losses. Prior to restoration activities, 16 sites included in the project were surveyed for the SRS Site Use system to serve as a protective covenant. Pre-restoration monitoring ended in Fall 2000, and post-restoration monitoring began in the Winter/Spring of 2001. The total interior harvest in the 16 bays after harvesting the trees was 19.6 ha. The margins in the open-canopy, pine savanna margin treatments were thinned. Margins containing areas with immature forested stands (bay 5184 and portions of bay 5011) were thinned using a mechanical shredder in November 2001. Over 126 hectares were included in the study areas (interior + margin). Planting of two tree species and the transplanting of wetland grass species was successful. From field surveys, it was estimated that approximately 2700 Nyssa sylvatica and 1900 Taxodium distichum seedlings were planted in the eight forested bays, resulting in an average planting density of ≈490 stems ha-1. One hundred seedlings of each species per bay (where available) were marked to evaluate survivability and growth. Wetland grass species were transplanted from donor sites on SRS to plots that ranged in size from 100-300 m2, depending on wetland size. On 0.75 and 0.6 meter centers, respectively, 2198 plugs of Panicum hemitomon and 3021 plugs of Leersia hexandra were transplanted. New shoots originating from the stumps were treated with a foliar herbicide (Garlon® 4) during the summer of 2001 using backpack sprayers. Preliminary information from 2000-2004 regarding the hydrologic, vegetation and faunal response to restoration is presented in this status report.
Predominant Nearshore Sediment Dispersal Patterns in Manila Bay
Directory of Open Access Journals (Sweden)
Fernando Siringan
1997-12-01
Net nearshore sediment drift patterns in Manila Bay were determined by combining the coastal geomorphology depicted in 1:50,000-scale topographic maps and Synthetic Aperture Radar (SAR) images with changes in shoreline position and predominant longshore current directions derived from the interaction of locally generated waves and bay morphology. Manila Bay is fringed by a variety of coastal subenvironments that reflect changing balances of fluvial, wave, and tidal processes. Along the northern coast, a broad tidal-river delta plain stretching from Bataan to Bulacan indicates the importance of tides, where the lateral extent of tidal influences is amplified by the very gentle coastal gradients. In contrast, along the Cavite coast sandy strandplains, spits, and wave-dominated deltas attest to the geomorphic importance of waves that enter the bay from the South China Sea. The estimates of net sediment drift derived from geomorphological, shoreline-change, and meteorological information are generally in good agreement. Sediment drift directions are predominantly to the northeast along Cavite, to the northwest along Manila and Bulacan, and to the north along Bataan. Wave refraction and eddy formation at the tip of the Cavite Spit cause southwestward sediment drift along the coast from Zapote to Kawit. Geomorphology indicates that onshore-offshore sediment transport is probably more important than alongshore transport along the coast fronting the tidal delta plain of northern Manila Bay. Disagreements between the geomorphic-derived and predicted net sediment drift directions may be due to interactions of wave-generated longshore currents with wind- and tide-generated currents.
Institute of Scientific and Technical Information of China (English)
ShenYike; FeiHeliang
1999-01-01
In this article, Bayes estimation of location parameters under restriction is brought forth. Since the Bayes estimator is closely connected with the first value of the order statistics that can be observed, it is possible to consider the "complete data" method, through which the pseudo-value of the first order statistic and pseudo-right-censored samples can be obtained. Thus the results under Type-II right censoring can be used directly to get more accurate estimators by the Bayes method.
Structured Additive Regression Models: An R Interface to BayesX
Directory of Open Access Journals (Sweden)
Nikolaus Umlauf
2015-02-01
Structured additive regression (STAR) models provide a flexible framework for modeling possible nonlinear effects of covariates: they contain the well-established frameworks of generalized linear models and generalized additive models as special cases, but also allow a wider class of effects, e.g., for geographical or spatio-temporal data, allowing for the specification of complex and realistic models. BayesX is a standalone software package for fitting a general class of STAR models. Based on a comprehensive open-source regression toolbox written in C++, BayesX uses Bayesian inference for estimating STAR models based on Markov chain Monte Carlo simulation techniques, a mixed model representation of STAR models, or stepwise regression techniques combining penalized least squares estimation with model selection. BayesX not only covers models for responses from univariate exponential families, but also models from less-standard regression situations such as models for multi-categorical responses with either ordered or unordered categories, continuous-time survival data, or continuous-time multi-state models. This paper presents a new fully interactive R interface to BayesX: the R package R2BayesX. With the new package, STAR models can be conveniently specified using R's formula language (with some extended terms), fitted using the BayesX binary, represented in R with objects of suitable classes, and finally printed/summarized/plotted. This makes BayesX much more accessible to users familiar with R and adds extensive graphics capabilities for visualizing fitted STAR models. Furthermore, R2BayesX complements the already impressive capabilities for semiparametric regression in R by a comprehensive toolbox comprising in particular more complex response types and alternative inferential procedures such as simulation-based Bayesian inference.
On The Estimation of Survival Function and Parameter Exponential Life Time Distribution
Directory of Open Access Journals (Sweden)
Hadeel S. Al-Kutubi
2009-01-01
Problem statement: The study of survival, reliability, and lifetime belongs to the same area of study, though the areas of application may differ. In survival analysis one can use several lifetime distributions; the exponential distribution with mean lifetime θ is one of them. To estimate this parameter and the survival function, we must use estimation procedures with lower MSE and MPE. Approach: The only statistical theory that combines modeling of inherent uncertainty and statistical uncertainty is Bayesian statistics. Bayes' theorem provides a solution to how to learn from data; it depends on the prior and posterior distributions, and the standard Bayes estimator depends on the Jeffreys prior. In this study we modified the Jeffreys prior information to get a modified Bayes estimator, and then compared it with the standard Bayes estimator and the maximum likelihood estimator to find the best (lowest MSE and MPE). Results: We derived the Bayesian and maximum likelihood estimators of the scale parameter and survival function. A simulation study was used to compare the estimators, and the mean square error (MSE) and mean percentage error (MPE) of the estimators were computed. Conclusion: The proposed modified Bayes estimator of the parameter and survival function was the best estimator (lowest MSE and MPE) when compared with the standard Bayes and maximum likelihood estimators.
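For the exponential distribution with mean θ, the standard Jeffreys-prior setup mentioned above has a closed form: under the prior π(θ) ∝ 1/θ the posterior is inverse-gamma(n, Σx), with posterior mean Σx/(n-1), alongside the MLE Σx/n. A minimal sketch (the paper's *modified* Bayes estimator is not reproduced here; the simulated data are illustrative):

```python
import numpy as np

def mle_theta(x):
    """Maximum likelihood estimate of the exponential mean theta."""
    return x.mean()

def bayes_theta(x):
    """Posterior mean of theta under the Jeffreys prior 1/theta:
    the posterior is inverse-gamma(n, sum(x)), mean sum(x)/(n-1)."""
    return x.sum() / (len(x) - 1)

def survival(t, theta):
    """Estimated survival function S(t) = exp(-t / theta)."""
    return np.exp(-t / theta)

# illustrative simulated lifetimes with true theta = 2
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=5000)
```

A Monte Carlo loop over many such samples, tabulating MSE and MPE of each estimator, reproduces the kind of comparison the abstract describes.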
Biomass and Carbon Stocks of Sofala Bay Mangrove Forests
Directory of Open Access Journals (Sweden)
Almeida A. Sitoe
2014-08-01
Mangroves could be key ecosystems in strategies addressing the mitigation of climate change through carbon storage. However, little is known regarding the carbon stocks of these ecosystems, particularly below-ground. This study was carried out in the mangrove forests of Sofala Bay, Central Mozambique, with the aim of quantifying carbon stocks of live and dead plant and soil components. The methods followed the procedures developed by the Center for International Forestry Research (CIFOR) for mangrove forests. In this study, we developed a general allometric equation to estimate individual tree biomass and soil carbon content (up to 100 cm depth). We estimated the carbon in the whole mangrove ecosystem of Sofala Bay, including dead trees, wood debris, herbaceous vegetation, pneumatophores, litter and soil. The general allometric equation derived for live trees was [Above-ground tree dry weight (kg) = 3.254 × exp(0.065 × DBH)], with root mean square error (RMSE) = 4.244 and coefficient of determination (R²) = 0.89. The average total carbon storage of the Sofala Bay mangrove was 218.5 Mg·ha−1, of which around 73% is stored in the soil. Mangrove conservation has potential for REDD+ programs, especially in regions like Mozambique, which contains extensive mangrove areas with high deforestation and degradation rates.
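The fitted allometric equation translates directly into code; the carbon-fraction step below is an added assumption (a commonly used ~0.47 biomass-to-carbon conversion), not a value reported in the abstract:

```python
import math

def aboveground_biomass_kg(dbh_cm):
    """General allometric equation fitted for Sofala Bay mangroves:
    above-ground tree dry weight (kg) = 3.254 * exp(0.065 * DBH)."""
    return 3.254 * math.exp(0.065 * dbh_cm)

def tree_carbon_kg(dbh_cm, carbon_fraction=0.47):
    """Carbon content, using an assumed ~0.47 biomass-to-carbon
    fraction (illustrative; not from the paper)."""
    return carbon_fraction * aboveground_biomass_kg(dbh_cm)
```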
Hierarchical Interference Mitigation for Massive MIMO Cellular Networks
Liu, An; Lau, Vincent
2014-09-01
We propose a hierarchical interference mitigation scheme for massive MIMO cellular networks. The MIMO precoder at each base station (BS) is partitioned into an inner precoder and an outer precoder. The inner precoder controls the intra-cell interference and is adaptive to local channel state information (CSI) at each BS (CSIT). The outer precoder controls the inter-cell interference and is adaptive to channel statistics. Such hierarchical precoding structure reduces the number of pilot symbols required for CSI estimation in massive MIMO downlink and is robust to the backhaul latency. We study joint optimization of the outer precoders, the user selection, and the power allocation to maximize a general concave utility which has no closed-form expression. We first apply random matrix theory to obtain an approximated problem with closed-form objective. We show that the solution of the approximated problem is asymptotically optimal with respect to the original problem as the number of antennas per BS grows large. Then using the hidden convexity of the problem, we propose an iterative algorithm to find the optimal solution for the approximated problem. We also obtain a low complexity algorithm with provable convergence. Simulations show that the proposed design has significant gain over various state-of-the-art baselines.
Modeling Bivariate Longitudinal Hormone Profiles by Hierarchical State Space Models.
Liu, Ziyue; Cappola, Anne R; Crofford, Leslie J; Guo, Wensheng
2014-01-01
The hypothalamic-pituitary-adrenal (HPA) axis is crucial in coping with stress and maintaining homeostasis. Hormones produced by the HPA axis exhibit both complex univariate longitudinal profiles and complex relationships among different hormones. Consequently, modeling these multivariate longitudinal hormone profiles is a challenging task. In this paper, we propose a bivariate hierarchical state space model, in which each hormone profile is modeled by a hierarchical state space model, with both population-average and subject-specific components. The bivariate model is constructed by concatenating the univariate models based on the hypothesized relationship. Because of the flexible framework of the state space form, the resultant models not only can handle complex individual profiles, but also can incorporate complex relationships between two hormones, including both concurrent and feedback relationships. Estimation and inference are based on marginal likelihood and posterior means and variances. Computationally efficient Kalman filtering and smoothing algorithms are used for implementation. Application of the proposed method to a study of chronic fatigue syndrome and fibromyalgia reveals that the relationships between adrenocorticotropic hormone and cortisol in the patient group are weaker than in healthy controls.
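The computational core of such models is the Kalman filter; the simplest instance, the local-level model, can be sketched as follows (the paper's bivariate hierarchical structure is not reproduced; noise variances are illustrative):

```python
import numpy as np

def kalman_local_level(y, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Kalman filter for the local-level state space model
    x_t = x_{t-1} + w_t (var q),  y_t = x_t + v_t (var r).
    Returns the filtered state estimates."""
    x, p = x0, p0
    xf = np.empty(len(y))
    for t, obs in enumerate(y):
        p = p + q                      # predict step: variance grows
        k = p / (p + r)                # Kalman gain
        x = x + k * (obs - x)          # update with the new observation
        p = (1.0 - k) * p
        xf[t] = x
    return xf
```

Stacking two such filters and letting one hormone's state enter the other's transition equation gives the concatenated bivariate form the abstract describes.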
A hierarchical community occurrence model for North Carolina stream fish
Midway, S.R.; Wagner, Tyler; Tracy, B.H.
2016-01-01
The southeastern USA is home to one of the richest—and most imperiled and threatened—freshwater fish assemblages in North America. For many of these rare and threatened species, conservation efforts are often limited by a lack of data. Drawing on a unique and extensive data set spanning over 20 years, we modeled occurrence probabilities of 126 stream fish species sampled throughout North Carolina, many of which occur more broadly in the southeastern USA. Specifically, we developed species-specific occurrence probabilities from hierarchical Bayesian multispecies models that were based on common land use and land cover covariates. We also used index of biotic integrity tolerance classifications as a second level in the model hierarchy; we identify this level as informative for our work, but it is flexible for future model applications. Based on the partial-pooling property of the models, we were able to generate occurrence probabilities for many imperiled and data-poor species in addition to highlighting a considerable amount of occurrence heterogeneity that supports species-specific investigations whenever possible. Our results provide critical species-level information on many threatened and imperiled species as well as information that may assist with re-evaluation of existing management strategies, such as the use of surrogate species. Finally, we highlight the use of a relatively simple hierarchical model that can easily be generalized for similar situations in which conventional models fail to provide reliable estimates for data-poor groups.
Hierarchical machining materials and their performance
DEFF Research Database (Denmark)
Sidorenko, Daria; Loginov, Pavel; Levashov, Evgeny
2016-01-01
as nanoparticles in the binder, or polycrystalline, aggregate-like reinforcements, also at several scale levels). Such materials can ensure better productivity, efficiency, and lower costs of drilling, cutting, grinding, and other technological processes. This article reviews the main groups of hierarchical...
Hierarchical Optimization of Material and Structure
DEFF Research Database (Denmark)
Rodrigues, Helder C.; Guedes, Jose M.; Bendsøe, Martin P.
2002-01-01
This paper describes a hierarchical computational procedure for optimizing material distribution as well as the local material properties of mechanical elements. The local properties are designed using a topology design approach, leading to single scale microstructures, which may be restricted... in various ways, based on design and manufacturing criteria. Implementation issues are also discussed and computational results illustrate the nature of the procedure...
Hierarchical structure of nanofibers by bubbfil spinning
Directory of Open Access Journals (Sweden)
Liu Chang
2015-01-01
A polymer bubble is easily broken under a small external force, forming various fragments that can be processed into products of different morphologies, including nanofibers and plate-like strips. A polyvinyl alcohol/honey solution is used in the experiment to demonstrate the hierarchical structure produced by bubbfil spinning.
Sharing the proceeds from a hierarchical venture
DEFF Research Database (Denmark)
Hougaard, Jens Leth; Moreno-Ternero, Juan D.; Tvede, Mich;
2017-01-01
We consider the problem of distributing the proceeds generated from a joint venture in which the participating agents are hierarchically organized. We introduce and characterize a family of allocation rules where revenue ‘bubbles up’ in the hierarchy. The family is flexible enough to accommodate...
Metal oxide nanostructures with hierarchical morphology
Ren, Zhifeng; Lao, Jing Yu; Banerjee, Debasish
2007-11-13
The present invention relates generally to metal oxide materials with varied symmetrical nanostructure morphologies. In particular, the present invention provides metal oxide materials comprising one or more metallic oxides with three-dimensionally ordered nanostructural morphologies, including hierarchical morphologies. The present invention also provides methods for producing such metal oxide materials.
Hierarchical Scaling in Systems of Natural Cities
Chen, Yanguang
2016-01-01
Hierarchies can be modeled by a set of exponential functions, from which we can derive a set of power laws indicative of scaling. These scaling laws are followed by many natural and social phenomena such as cities, earthquakes, and rivers. This paper is devoted to revealing the scaling patterns in systems of natural cities by reconstructing the hierarchy with cascade structure. The cities of America, Britain, France, and Germany are taken as examples to make empirical analyses. The hierarchical scaling relations can be well fitted to the data points within the scaling ranges of the size and area of the natural cities. The size-number and area-number scaling exponents are close to 1, and the allometric scaling exponent is slightly less than 1. The results suggest that natural cities follow hierarchical scaling laws and hierarchical conservation law. Zipf's law proved to be one of the indications of the hierarchical scaling, and the primate law of city-size distribution represents a local pattern and can be mer...
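The cascade construction described above, in which exponential laws for the number and size of classes combine into a power law, can be sketched directly. The function and parameter names (`rn` for the number ratio, `rp` for the size ratio) are assumptions for illustration; the key relation is that level m has N_m = rn**m classes of size P_m = P_1 / rp**m, so eliminating m gives N ∝ P^(-D) with scaling exponent D = ln(rn) / ln(rp).

```python
import numpy as np

def cascade_hierarchy(levels=8, rn=2.0, rp=2.0, p1=1e6):
    """Top-down cascade hierarchy: level m holds rn**m classes,
    each of size p1 / rp**m. Two exponential laws in m combine
    into the power law N ~ P**(-D), D = ln(rn) / ln(rp)."""
    m = np.arange(levels)
    N = rn ** m          # number of classes per level (grows exponentially)
    P = p1 / rp ** m     # class size per level (decays exponentially)
    return N, P

# Recover the scaling exponent by log-log regression across levels
N, P = cascade_hierarchy()
D = -np.polyfit(np.log(P), np.log(N), 1)[0]
```

With rn equal to rp the exponent D is 1, matching the size-number exponents close to 1 reported for natural cities and the Zipf-like rank-size pattern the abstract links to hierarchical scaling.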