Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
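The margin criterion described above can be sketched concretely. The following is a minimal illustration, not the authors' implementation: it uses a toy naive Bayes model, synthetic data, a hinge-capped margin, and plain finite-difference gradient ascent in place of their conjugate gradient optimizer. The softmax reparameterization is one standard way to keep the network's parameters normalized throughout optimization, which is the abstract's key point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (synthetic, for illustration only): binary features,
# the class label copies feature 0, the other features are noise.
X = rng.integers(0, 2, size=(120, 4))
y = X[:, 0].copy()

C, F = 2, X.shape[1]

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def class_scores(theta, X):
    """log P(c) + sum_j log P(x_j | c) for a naive Bayes model whose
    parameters stay normalized via a softmax reparameterization."""
    prior_logit = theta[:C]
    feat_logit = theta[C:].reshape(C, F, 2)
    log_prior = np.log(softmax(prior_logit))
    log_feat = np.log(softmax(feat_logit))      # log P(x_j = v | c)
    onehot = np.eye(2)[X]                       # (N, F, 2)
    return log_prior + np.einsum('cjv,njv->nc', log_feat, onehot)

def margin_objective(theta, X, y, gamma=1.0):
    """Sum of hinge-capped multiclass margins
    d_n = score(y_n) - max_{c != y_n} score(c)."""
    s = class_scores(theta, X)
    idx = np.arange(len(y))
    true = s[idx, y]
    rest = s.copy()
    rest[idx, y] = -np.inf
    return np.minimum(true - rest.max(axis=1), gamma).sum()

# Finite-difference gradient ascent stands in for the paper's
# conjugate gradient optimizer; the point is that the probabilistic
# interpretation is preserved at every iterate.
theta = np.zeros(C + C * F * 2)
eps, lr = 1e-5, 0.05
obj0 = margin_objective(theta, X, y)
for _ in range(150):
    base = margin_objective(theta, X, y)
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        t = theta.copy()
        t[i] += eps
        grad[i] = (margin_objective(t, X, y) - base) / eps
    theta += lr * grad
obj1 = margin_objective(theta, X, y)
```

Because the parameters remain valid probabilities at every step, a missing feature at classification time can be handled by simply dropping its term from the sum of log-probabilities, which is exactly the property the abstract highlights.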
Maximum-entropy distributions of correlated variables with prespecified marginals.
Larralde, Hernán
2012-12-01
The problem of determining the joint probability distributions for correlated random variables with prespecified marginals is considered. When the joint distribution satisfying all the required conditions is not unique, the "most unbiased" choice corresponds to the distribution of maximum entropy. The calculation of the maximum-entropy distribution requires the solution of rather complicated nonlinear coupled integral equations, exact solutions to which are obtained for the case of Gaussian marginals; otherwise, the solution can be expressed as a perturbation around the product of the marginals if the marginal moments exist.
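The Gaussian case mentioned above can be checked numerically: with standard normal marginals and a prescribed correlation, the maximum-entropy joint is the bivariate Gaussian. A small sketch (the grid range, step, and correlation value are arbitrary illustration choices):

```python
import numpy as np

# Standard normal marginals with prescribed correlation rho: the
# maximum-entropy joint is the bivariate Gaussian. We verify that
# integrating y out of the joint recovers the N(0, 1) marginal.
rho = 0.6
x = np.linspace(-8.0, 8.0, 1601)
dy = x[1] - x[0]
XX, YY = np.meshgrid(x, x, indexing='ij')

norm = 1.0 / (2.0 * np.pi * np.sqrt(1.0 - rho ** 2))
joint = norm * np.exp(-(XX ** 2 - 2 * rho * XX * YY + YY ** 2)
                      / (2.0 * (1.0 - rho ** 2)))

marginal = joint.sum(axis=1) * dy               # integrate out y
target = np.exp(-x ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
err = np.abs(marginal - target).max()
```

The same check with a non-Gaussian target marginal would fail for the bivariate Gaussian, which is where the perturbative solution around the product of the marginals becomes relevant.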
Marginal Solutions for the Superstring
Erler, Theodore
2007-01-01
We construct a class of analytic solutions of WZW-type open superstring field theory describing marginal deformations of a reference D-brane background. The deformations we consider are generated by on-shell vertex operators with vanishing operator products. The superstring solution exhibits an intriguing duality with the corresponding marginal solution of the bosonic string. In particular, the superstring problem is "dual" to the problem of re-expressing the bosonic marginal solution in pure gauge form. This represents the first nonsingular analytic solution of open superstring field theory.
Heteroscedastic one-factor models and marginal maximum likelihood estimation
Hessen, D.J.; Dolan, C.V.
2009-01-01
In the present paper, a general class of heteroscedastic one-factor models is considered. In these models, the residual variances of the observed scores are explicitly modelled as parametric functions of the one-dimensional factor score. A marginal maximum likelihood procedure for parameter estimation…
Hierarchical Maximum Margin Learning for Multi-Class Classification
Yang, Jian-Bo
2012-01-01
With myriad classes, designing accurate and efficient classifiers becomes very challenging for multi-class classification. Recent research has shown that class structure learning can greatly facilitate multi-class learning. In this paper, we propose a novel method to learn the class structure for multi-class classification problems. The class structure is assumed to be a binary hierarchical tree. To learn such a tree, we propose a maximum separating margin method to determine the child nodes of any internal node. The proposed method ensures that the two class groups represented by any two sibling nodes are maximally separable. In the experiments, we evaluate the accuracy and efficiency of the proposed method against other multi-class classification methods on real-world large-scale problems. The results show that the proposed method outperforms benchmark methods in terms of accuracy for most datasets and performs comparably with other class structure learning methods in terms of efficiency for all datasets.
Marginal Maximum Likelihood Estimation of Item Response Models in R
Matthew S. Johnson
2007-02-01
Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
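The marginal likelihood that MML maximizes is an integral over the latent trait, usually approximated by Gauss-Hermite quadrature. Below is a minimal sketch of that integral for a simple Rasch model; the item difficulties and response pattern are made up for illustration, and this is not the paper's R code.

```python
import numpy as np

def rasch_p(theta, b):
    """P(correct | theta) for a Rasch item with difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

b = np.array([-1.0, 0.0, 1.5])   # hypothetical item difficulties
u = np.array([1, 1, 0])          # one examinee's response pattern

# Gauss-Hermite nodes/weights for the probabilists' weight e^{-t^2/2};
# dividing by sqrt(2*pi) turns them into an N(0, 1) quadrature rule.
nodes, weights = np.polynomial.hermite_e.hermegauss(21)
w = weights / np.sqrt(2.0 * np.pi)

P = rasch_p(nodes[:, None], b[None, :])            # (21 nodes, 3 items)
cond_like = np.prod(P ** u * (1 - P) ** (1 - u), axis=1)
marginal_like = float((w * cond_like).sum())       # integral over theta
```

MML estimation maximizes the product of such marginal likelihoods over all examinees with respect to the item parameters, typically with the Bock-Aitkin EM algorithm.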
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that, in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model
Roberts, James S.; Thompson, Vanessa M.
2011-01-01
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
Analytic solutions for marginal deformations in open superstring field theory
Okawa, Y.
2007-04-15
We extend the calculable analytic approach to marginal deformations recently developed in open bosonic string field theory to open superstring field theory formulated by Berkovits. We construct analytic solutions to all orders in the deformation parameter when operator products made of the marginal operator and the associated superconformal primary field are regular.
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
Maja Olsbjerg
2015-10-01
Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also used when the focus is on physical functioning or psychological wellbeing. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.
Bohui Zhu
2013-01-01
This paper presents a novel maximum margin clustering method with immune evolution (IEMMC) for automatic diagnosis of electrocardiogram (ECG) arrhythmias. The diagnostic system consists of signal processing, feature extraction, and the IEMMC algorithm for clustering of ECG arrhythmias. First, the raw ECG signal is processed by an adaptive ECG filter based on wavelet transforms, and the waveform of the ECG signal is detected; then, features are extracted from the ECG signal to cluster different types of arrhythmias by the IEMMC algorithm. Three performance indicators (sensitivity, specificity, and accuracy) are used to assess the effectiveness of the IEMMC method for ECG arrhythmias. Compared with the K-means and iterSVR algorithms, the IEMMC algorithm shows better performance not only in clustering results but also in global search and convergence ability, which proves its effectiveness for the detection of ECG arrhythmias.
Min, Rui; Cheng, Jian; Price, True; Wu, Guorong; Shen, Dinggang
2014-01-01
In order to establish correspondences between different brains for comparison, spatial-normalization-based morphometric measurements have been widely used in the analysis of Alzheimer's disease (AD). In the literature, different subjects are often compared in one atlas space, which may be insufficient for revealing complex brain changes. In this paper, instead of deploying one atlas for feature extraction and classification, we propose a maximum-margin-based representation learning (MMRL) method to learn the optimal representation from multiple atlases. Unlike traditional methods that perform representation learning separately from classification, we propose to learn the new representation jointly with the classification model, which is more powerful for discriminating AD patients from normal controls (NC). We evaluated the proposed method on the ADNI database and achieved 90.69% accuracy for AD/NC classification and 73.69% for p-MCI/s-MCI classification.
Hu, Yong; Peng, Silong; Bi, Yiming; Tang, Liang
2012-12-21
Traditional multivariate calibration transfer methods such as piecewise direct standardization (PDS) are usually applied to quantitative analysis. To make such methods applicable to qualitative analysis of Fourier transform infrared (FTIR) spectroscopy, we propose an improved calibration transfer method based on the maximum margin criterion (CTMMC). The new method not only considers the spectral changes under different conditions, but also takes into account the geometric characteristics of spectra from different classes, so that transformed spectra from different classes are separated as far as possible; this improves the performance of the subsequent qualitative analysis. A comparative study is provided between the proposed CTMMC method and other traditional calibration transfer methods on two data sets. Experimental results show that the proposed method achieves better performance than previous methods.
Molecular clock fork phylogenies: closed form analytic maximum likelihood solutions.
Chor, Benny; Snir, Sagi
2004-12-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Quoting Ziheng Yang, who initiated the analytic approach, "this seems to be the simplest case, but has many of the conceptual and statistical complexities involved in phylogenetic estimation." In this work, we give general analytic solutions for a family of trees with four taxa, two-state characters, under a molecular clock. The change from three to four taxa incurs a major increase in the complexity of the underlying algebraic system, and requires novel techniques and approaches. We start by presenting the general maximum likelihood problem on phylogenetic trees as a constrained optimization problem, and the resulting system of polynomial equations. In full generality, it is infeasible to solve this system, therefore specialized tools for the molecular clock case are developed. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). We combine the ultrametric properties of molecular clock fork trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations for the fork. We finally employ symbolic algebra software to obtain closed-form analytic solutions (expressed parametrically in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that each fork topology has a unique (local and global) ML point.
Maximum likelihood Jukes-Cantor triplets: analytic solutions.
Chor, Benny; Hendy, Michael D; Snir, Sagi
2006-03-01
Maximum likelihood (ML) is a popular method for inferring a phylogenetic tree of the evolutionary relationship of a set of taxa from observed homologous aligned genetic sequences of the taxa. Generally, the computation of the ML tree is based on numerical methods, which in some cases are known to converge to a local, suboptimal maximum on a tree. The extent of this problem is unknown; one approach is to attempt to derive algebraic equations for the likelihood and find the maximum points analytically. This approach has so far only been successful in the very simplest cases of three or four taxa under the Neyman model of evolution of two-state characters. In this paper we extend this approach, for the first time, to four-state characters: the Jukes-Cantor model under a molecular clock, on a tree T on three taxa, a rooted triple. We employ spectral methods (Hadamard conjugation) to express the likelihood function parameterized by the path-length spectrum. Taking partial derivatives, we derive a set of polynomial equations whose simultaneous solution contains all critical points of the likelihood function. Using tools of algebraic geometry (the resultant of two polynomials) in a computer algebra package (Maple), we are able to find all turning points analytically. We then employ this method on real sequence data and obtain realistic results on the primate-rodent divergence time.
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Marginal Maximum Likelihood Estimation of a Latent Variable Model with Interaction
Cudeck, Robert; Harring, Jeffrey R.; du Toit, Stephen H. C.
2009-01-01
There has been considerable interest in nonlinear latent variable models specifying interaction between latent variables. Although it seems to be only slightly more complex than linear regression without the interaction, the model that includes a product of latent variables cannot be estimated by maximum likelihood assuming normality…
Monaco, James Peter; Madabhushi, Anant
2011-07-01
The ability of classification systems to adjust their performance (sensitivity/specificity) is essential for tasks in which certain errors are more significant than others. For example, mislabeling cancerous lesions as benign is typically more detrimental than mislabeling benign lesions as cancerous. Unfortunately, methods for modifying the performance of Markov random field (MRF) based classifiers are noticeably absent from the literature, and thus most such systems restrict their performance to a single, static operating point (a paired sensitivity/specificity). To address this deficiency we present weighted maximum posterior marginals (WMPM) estimation, an extension of maximum posterior marginals (MPM) estimation. Whereas the MPM cost function penalizes each error equally, the WMPM cost function allows misclassifications associated with certain classes to be weighted more heavily than others. This creates a preference for specific classes, and consequently a means for adjusting classifier performance. Realizing WMPM estimation (like MPM estimation) requires estimates of the posterior marginal distributions. The most prevalent means for estimating these, proposed by Marroquin, utilizes a Markov chain Monte Carlo (MCMC) method. Though Marroquin's method (M-MCMC) yields estimates that are sufficiently accurate for MPM estimation, they are inadequate for WMPM. To more accurately estimate the posterior marginals we present an equally simple, but more effective extension of the MCMC method (E-MCMC). Assuming an identical number of iterations, E-MCMC as compared to M-MCMC yields estimates with higher fidelity, thereby 1) allowing a far greater number and diversity of operating points and 2) improving overall classifier performance. To illustrate the utility of WMPM and compare the efficacies of M-MCMC and E-MCMC, we integrate them into our MRF-based classification system for detecting cancerous glands in (whole-mount or quarter) histological sections of the prostate.
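The difference between the MPM and WMPM decision rules can be shown in a few lines. This is an illustrative sketch with made-up posterior marginals, not the authors' MRF system; weighting the posteriors before taking the argmax is what moves the operating point.

```python
import numpy as np

# Hypothetical posterior marginals for three sites over classes
# {0: benign, 1: cancerous} -- made-up numbers for illustration.
posteriors = np.array([
    [0.70, 0.30],
    [0.55, 0.45],
    [0.20, 0.80],
])

def mpm(post):
    """Maximum posterior marginals: every error costs the same."""
    return post.argmax(axis=1)

def wmpm(post, weights):
    """Weighted MPM: per-class weights on the posteriors shift the
    operating point (here, favoring sensitivity to class 1)."""
    return (post * weights).argmax(axis=1)

plain = mpm(posteriors)                           # class 1 at site 3 only
shifted = wmpm(posteriors, np.array([1.0, 1.5]))  # site 2 flips to class 1
```

Sweeping the weight on class 1 traces out a family of operating points, which is why accurate posterior marginal estimates (the E-MCMC contribution) matter more for WMPM than for plain MPM.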
Barrows, Timothy T.; Juggins, Steve
2005-04-01
We present new last glacial maximum (LGM) sea-surface temperature (SST) maps for the oceans around Australia based on planktonic foraminifera assemblages. To provide the most reliable SST estimates we use the modern analog technique, the revised analog method, and artificial neural networks in conjunction with an expanded modern core top database. All three methods produce similar quality predictions and the root mean squared error of the consensus prediction (the average of the three) under cross-validation is only ±0.77 °C. We determine LGM SST using data from 165 cores, most of which have good age control from oxygen isotope stratigraphy and radiocarbon dates. The coldest SST occurred at 20,500±1400 cal yr BP, predating the maximum in oxygen isotope records at 18,200±1500 cal yr BP. During the LGM interval we observe cooling within the tropics of up to 4 °C in the eastern Indian Ocean, and mostly between 0 and 3 °C elsewhere along the equator. The high latitudes cooled by the greatest degree, a maximum of 7-9 °C in the southwest Pacific Ocean. Our maps improve substantially on previous attempts by making higher quality temperature estimates, using more cores, and improving age control.
Local solutions of Maximum Likelihood Estimation in Quantum State Tomography
Gonçalves, Douglas S; Lavor, Carlile; Farías, Osvaldo Jiménez; Ribeiro, P H Souto
2011-01-01
Maximum likelihood estimation is one of the most used methods in quantum state tomography, where the aim is to find the best density matrix for the description of a physical system. Results of measurements on the system should match the expected values produced by the density matrix. In some cases, however, if the matrix is parameterized to ensure positivity and unit trace, the negative log-likelihood function may have several local minima. In several papers in the field, authors associate a source of errors with the possibility that most of these local minima are not global, so that optimization methods can be trapped in the wrong minimum, leading to a wrong density matrix. Here we show that, for convex negative log-likelihood functions, all local minima are global. We also show that a practical source of errors is in fact the use of optimization methods that do not have the global convergence property or that present numerical instabilities. The clarification of this point has important repercussions on quantum information...
Real analytic solutions for marginal deformations in open superstring field theory
Okawa, Yuji
2007-09-01
We construct analytic solutions for marginal deformations satisfying the reality condition in open superstring field theory formulated by Berkovits when operator products made of the marginal operator and the associated superconformal primary field are regular. Our strategy is based on the recent observation by Erler that the problem of finding solutions for marginal deformations in open superstring field theory can be reduced to a problem in the bosonic theory of finding a finite gauge parameter for a certain pure-gauge configuration labeled by the parameter of the marginal deformation. We find a gauge transformation generated by a real gauge parameter which infinitesimally changes the deformation parameter and construct a finite gauge parameter by its path-ordered exponential. The resulting solution satisfies the reality condition by construction.
Fluid and Solute Fluxes from the Deformation Front to the Upper Slope at the Cascadia Margin
Berg, R. D.; Solomon, E. A.; Johnson, H. P.; Culling, D. P.; Harris, R. N.
2014-12-01
Fluid expulsion from accretionary convergent margins may be an important factor in global geochemical cycling and biogeochemical processes. However, the rates and distribution of fluid flow at these margins are not well known. To better understand these processes at the Cascadia margin, we collected 35 short sediment cores and Mosquito fluid flow meter measurements along a transect from the deformation front to the upper slope offshore of the Washington coast as part of a coupled heat and fluid flow survey. We identified two active seep areas, one emergent at 1990 mbsl, and one long-lived at 1050 mbsl. At both sites we observed carbonate deposits several meters thick and hundreds of meters in horizontal dimension. Thermogenic hydrocarbons measured in pore waters at the long-lived seep site indicate deeply sourced fluids originating at >80 °C, likely migrating along faults. In addition, pore water solute profiles from the emergent seep site suggest active shallow circulation in the upper sediment column, with implications for the seep biological community and the fluid budget of the margin. Pore fluid advection rates along the transect are used to characterize the geographic distribution of and geologic controls on active fluid pathways. Pore water solute profiles from the sediment cores are integrated with the measured fluid advection rates to calculate solute fluxes out of the margin. Our transect of fluid flow and pore water chemistry measurements from the Cascadia margin will help to better understand fluid and geochemical cycling at accretionary convergent margins.
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
Probing Ionic Liquid Aqueous Solutions Using Temperature of Maximum Density Isotope Effects
Mohammad Tariq
2013-03-01
This work is a new development of an extensive research program that is investigating, for the first time, shifts in the temperature of maximum density (TMD) of aqueous solutions caused by ionic liquid solutes. In the present case we have compared the shifts caused by three ionic liquid solutes with a common cation, 1-ethyl-3-methylimidazolium, coupled with acetate, ethylsulfate and tetracyanoborate anions, in normal and deuterated water solutions. The observed differences are discussed in terms of the nature of the corresponding anion-water interactions.
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun
2002-01-01
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates.
Luchko, Yuri; Povstenko, Yuriy
2012-01-01
In this paper, the one-dimensional time-fractional diffusion-wave equation with the fractional derivative of order $1 \le \alpha \le 2$ is revisited. This equation interpolates between the diffusion and the wave equations, which behave quite differently regarding their response to a localized disturbance: whereas the diffusion equation describes a process where a disturbance spreads infinitely fast, the propagation speed of the disturbance is a constant for the wave equation. For the time-fractional diffusion-wave equation, the propagation speed of a disturbance is infinite, but its fundamental solution possesses a maximum that disperses with a finite speed. In this paper, the fundamental solution of the Cauchy problem for the time-fractional diffusion-wave equation, its maximum location, maximum value, and other important characteristics are investigated in detail. To illustrate the analytical formulas, results of numerical calculations and plots are presented. Numerical algorithms and programs used to produce plots...
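In standard notation (the abstract itself does not display the equation; the Caputo form of the time derivative is the usual convention in this setting), the equation being studied is

```latex
\frac{\partial^{\alpha} u}{\partial t^{\alpha}} = \frac{\partial^{2} u}{\partial x^{2}},
\qquad 1 \le \alpha \le 2,
```

which reduces to the classical diffusion equation at $\alpha = 1$ and to the wave equation at $\alpha = 2$.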
Attig, J.W.; Hanson, P.R.; Rawling, J.E.; Young, A.R.; Carson, E.C.
2011-01-01
Samples for optical dating were collected to estimate the time of sediment deposition in small ice-marginal lakes in the Baraboo Hills of Wisconsin. These lakes formed high in the Baraboo Hills when drainage was blocked by the Green Bay Lobe when it was at or very near its maximum extent. Therefore, these optical ages provide control for the timing of the thinning and recession of the Green Bay Lobe from its maximum position. Sediment that accumulated in four small ice-marginal lakes was sampled and dated. Difficulties with field sampling and estimating dose rates made the interpretation of optical ages derived from samples from two of the lake basins problematic. Samples from the other two lake basins, South Bluff and Feltz, responded well during laboratory analysis and showed reasonably good agreement between the multiple ages produced at each site. These ages averaged 18.2 ka (n = 6) and 18.6 ka (n = 6), respectively. The optical ages from these two lake basins, where we could carefully select sediment samples, provide firm evidence that the Green Bay Lobe stood at or very near its maximum extent until about 18.5 ka. The persistence of ice-marginal lakes in these basins high in the Baraboo Hills indicates that the ice of the Green Bay Lobe had not experienced significant thinning near its margin prior to about 18.5 ka. These ages are the first to directly constrain the timing of the maximum extent of the Green Bay Lobe and the onset of deglaciation in the area for which the Wisconsin Glaciation was named. © 2011 Elsevier B.V.
Trapped and marginally trapped surfaces in Weyl-distorted Schwarzschild solutions
Pilkington, Terry; Fitzgerald, Joseph; Booth, Ivan
2011-01-01
To better understand the allowed range of black hole geometries, we study Weyl-distorted Schwarzschild solutions. They always contain trapped surfaces, a singularity and an isolated horizon and so should be understood to be (geometric) black holes. However we show that for large distortions the isolated horizon is neither a future outer trapping horizon (FOTH) nor even a marginally trapped surface: slices of the horizon cannot be infinitesimally deformed into (outer) trapped surfaces. We consider the implications of this result for popular quasilocal definitions of black holes.
Effects of electric field on the maximum electro-spinning rate of silk fibroin solutions.
Park, Bo Kyung; Um, In Chul
2017-02-01
Owing to the excellent cytocompatibility of silk fibroin (SF) and the simple fabrication of nano-fibrous webs, electro-spun SF webs have attracted much research attention in numerous biomedical fields. Because the production rate of electro-spun webs depends strongly on the electro-spinning rate used, the electro-spinning rate is increasingly important. In the present study, to improve the electro-spinning rate of SF solutions, various electric fields were applied during electro-spinning of SF, and their effects on the maximum electro-spinning rate of the SF solution, as well as on the diameters and molecular conformations of the electro-spun SF fibers, were examined. As the electric field was increased, the maximum electro-spinning rate of the SF solution also increased. The maximum electro-spinning rate of a 13% SF solution could be increased 12-fold by increasing the electric field from 0.5 kV/cm (0.25 mL/h) to 2.5 kV/cm (3.0 mL/h). The dependence of the fiber diameter on the applied electric field was not significant for less-concentrated SF solutions (7-9% SF). On the other hand, at higher SF concentrations the electric field had a greater effect on the resulting fiber diameter. The electric field had a minimal effect on the molecular conformation and crystallinity index of the electro-spun SF webs.
MLPnP - A Real-Time Maximum Likelihood Solution to the Perspective-n-Point Problem
Urban, S.; Leitloff, J.; Hinz, S.
2016-06-01
In this paper, a statistically optimal solution to the Perspective-n-Point (PnP) problem is presented. Many solutions to the PnP problem are geometrically optimal but do not consider the uncertainties of the observations. In addition, it would be desirable to have an internal estimate of the accuracy of the estimated rotation and translation parameters of the camera pose. Thus, we propose a novel maximum likelihood solution to the PnP problem that incorporates image observation uncertainties while remaining real-time capable. Further, the presented method is general, as it works with 3D direction vectors instead of 2D image points and is thus able to cope with arbitrary central camera models. This is achieved by projecting (and thus reducing) the covariance matrices of the observations to the corresponding vector tangent space.
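The tangent-space covariance reduction described above can be sketched in a few lines. This is an illustrative reconstruction under common conventions, not the authors' code; the function names and the toy covariance are assumptions.

```python
import numpy as np

def tangent_basis(v):
    """Orthonormal 3x2 basis of the plane orthogonal to the unit bearing vector v
    (the tangent space of the unit sphere at v), found via the SVD null space."""
    _, _, Vt = np.linalg.svd(v.reshape(1, 3))
    return Vt[1:].T  # columns span the tangent plane

def reduce_covariance(v, cov3):
    """Project a 3x3 bearing-vector covariance onto the 2D tangent space at v."""
    J = tangent_basis(v)
    return J.T @ cov3 @ J  # 2x2 reduced covariance

v = np.array([0.0, 0.0, 1.0])        # unit bearing vector
cov3 = np.diag([1e-6, 1e-6, 0.0])    # toy observation covariance, singular along v
cov2 = reduce_covariance(v, cov3)    # full-rank 2x2 covariance in the tangent plane
```

Reducing the covariance this way removes the rank deficiency along the bearing direction itself, which is what makes a maximum likelihood weighting over unit direction vectors well posed.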
A Pseudo-Boolean Solution to the Maximum Quartet Consistency Problem
Morgado, Antonio
2008-01-01
Determining the evolutionary history of given biological data is an important task in the biological sciences. Given a set of quartet topologies over a set of taxa, the Maximum Quartet Consistency (MQC) problem consists of computing a global phylogeny that satisfies the maximum number of quartets. A number of solutions have been proposed for the MQC problem, including Dynamic Programming, Constraint Programming, and more recently Answer Set Programming (ASP). ASP is currently the most efficient approach for optimally solving the MQC problem. This paper proposes encoding the MQC problem with pseudo-Boolean (PB) constraints. The use of PB allows solving the MQC problem with efficient PB solvers, and also allows considering different modeling approaches for the MQC problem. Initial results are promising, and suggest that PB can be an effective alternative for solving the MQC problem.
Maximum Effective Hole Mathematical Model and Exact Solution for Commingled Reservoir
孙贺东; 刘磊; 周芳德; 高承泰
2003-01-01
The maximum effective hole-diameter mathematical model describing the flow of slightly compressible fluid through a commingled reservoir was solved rigorously with consideration of wellbore storage and different skin factors. The exact solutions for the wellbore pressure and the production rate obtained from layer j, for a well producing at a constant rate from a radial drainage area with infinite, constant-pressure, and no-flow outer boundary conditions, were expressed in terms of ordinary Bessel functions. These solutions were computed numerically by Crump's numerical inversion method, and the behavior of the systems was studied as a function of various reservoir parameters. The model was compared with the real-wellbore-radius model. The new model is numerically stable whether the skin factor is positive or negative, whereas the real-wellbore-radius model is numerically stable only when the skin factor is positive.
Kelderman, Henk
1992-01-01
In this paper algorithms are described for obtaining the maximum likelihood estimates of the parameters in loglinear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual cou
Shifts in the temperature of maximum density (TMD) of ionic liquid aqueous solutions.
Tariq, M; Esperança, J M S S; Soromenho, M R C; Rebelo, L P N; Lopes, J N Canongia
2013-07-14
This work investigates for the first time shifts in the temperature of maximum density (TMD) of water caused by ionic liquid solutes. A vast amount of high-precision volumetric data (more than 6000 equilibrated, static, high-precision density determinations corresponding to ∼90 distinct ionic liquid aqueous solutions of 28 different types of ionic liquid) allowed us to analyze the TMD shifts for different homologous series or similar sets of ionic solutes and explain the overall effects in terms of hydrophobic, electrostatic and hydrogen-bonding contributions. The differences between the observed TMD shifts are discussed taking into account the different types of possible solute-water interactions that can modify the structure of the aqueous phase. The results also reveal different insights concerning the nature of the ions that constitute typical ionic liquids, and are consistent with previous results that established hydrophobic and hydrophilic scales for ionic liquid ions based on their specific interactions with water and other probe molecules.
Jennerjahn, T. C.; Gesierich, K.; Schefuß, E.; Mohtadi, M.
2014-12-01
Global climate change is a mosaic of regional changes to a large extent determined by region-specific feedbacks between climate and ecosystems. At present the ocean is forming a major sink in the global carbon cycle. Organic matter (OM) storage in sediments displays large regional variations and varied over time during the Quaternary. Upwelling regions are sites of high primary productivity and major depocenters of organic carbon (OC), the least understood of which is the Indian Ocean upwelling off Indonesia. In order to reconstruct the burial and composition of OM during the Late Quaternary, we analyzed five sediment cores from the Indian Ocean continental margin off the Indonesian islands Sumatra to Flores spanning the last 20,000 years (20 kyr). Sediments were analyzed for bulk composition, stable carbon and nitrogen isotopes of OM, amino acids and hexosamines and terrestrial plant wax n-alkanes and their stable carbon isotope composition. Sedimentation rates hardly varied over time in the western part of the transect. They were slightly lower in the East during the Last Glacial Maximum (LGM) and deglaciation, but increased strongly during the Holocene. The amount and composition of OM was similar along the transect with maximum values during the deglaciation and the late Holocene. High biogenic opal covarying with OM content indicates upwelling-induced primary productivity dominated by diatoms to be a major control of OM burial in sediments in the East during the past 20 kyr. The content of labile OM was low throughout the transect during the LGM and increased during the late Holocene. The increase was stronger and the OM less degraded in the East than in the West indicating that continental margin sediments off Java and Flores were the major depocenter of OC burial along the Indian Ocean margin off SW Indonesia. Temporal variations probably resulted from changes in upwelling intensity and terrestrial inputs driven by variations in monsoon strength.
Lyapunov Functions and Solutions of the Lyapunov Matrix Equation for Marginally Stable Systems
Kliem, Wolfhard; Pommer, Christian
2000-01-01
of the Lyapunov matrix equation and characterize the set of matrices $(B, C)$ which guarantees marginal stability. The theory is applied to gyroscopic systems, to indefinite damped systems, and to circulatory systems, showing how to choose certain parameter matrices to get sufficient conditions for marginal...
Maximum-margin fuzzy classifier based on boundary
刘忠宝; 王士同
2012-01-01
Several kinds of current boundary-based classification methods using hyperplanes, hyperspheres, or ellipsoids were studied, and a novel classification method called the Maximum-margin Fuzzy Classifier (MFC) was proposed, using a space point as the classification criterion. In the proposed method, a fuzzy classification point c is first chosen in the pattern space; c should be as close to the two classes as possible, while the angle between the two classes should be as large as possible. Testing points are then classified in terms of the maximum angular margin between c and all the training points. Finally, the application of the MFC is extended from two-class to one-class classification by exploiting the equivalence between the kernel dual of the MFC and the Minimum Enclosing Ball (MEB). Comparative experiments with current classification methods verify that the MFC has good classification performance and noise resistance, with classification accuracy reaching 98.9%.
Пересада, Сергей Михайлович; Дымко, Сергей Сергеевич; Ковбаса, Сергей Николаевич
2010-01-01
The paper reports a generalized solution of indirect vector control of induction motors with maximum torque per ampere ratio. A novel indirect field-oriented torque tracking controller is designed, which guarantees asymptotic torque and flux tracking with asymptotic field orientation. Results of simulation and experimental tests illustrate important features of the control proposed.
Shen, Hua
2016-10-19
A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to enforce the sufficient condition, which applies to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
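The role a slope limiter plays in enforcing a discrete maximum principle can be illustrated with a standard 1D minmod limiter. This is a generic sketch, not the paper's CE/SE limiter, and the test field u is invented.

```python
import numpy as np

def minmod(a, b):
    """Classic minmod: zero when the arguments differ in sign, else the one
    with smaller magnitude."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def limited_slopes(u):
    """Minmod-limited cell slopes for a 1D field u (periodic boundaries)."""
    left = u - np.roll(u, 1)     # backward differences
    right = np.roll(u, -1) - u   # forward differences
    return minmod(left, right)

u = np.array([0.0, 0.0, 1.0, 1.0, 0.2, 0.0])
s = limited_slopes(u)
# Reconstructed cell-edge values u +/- s/2 stay within [min(u), max(u)]:
# this boundedness is the discrete maximum principle the limiter enforces.
edges = np.concatenate([u - 0.5 * s, u + 0.5 * s])
```

An unlimited reconstruction (using, say, central slopes everywhere) can overshoot at the jump between cells 1 and 2; the minmod choice suppresses the slope there and keeps all reconstructed values within the data range.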
Xuan Guo
2016-01-01
The theoretical formula for the maximum internal forces of a circular tunnel lining structure under underground impact loads is derived in this paper. The internal force calculation formula under different equivalent forms of impact pseudostatic loads is obtained. Furthermore, by comparing the theoretical solution with the measured data from a top-blasting model test of a circular tunnel, it is found that the proposed theoretical results accord well with the experimental values. The corresponding equivalent impact pseudostatic triangular load is the most realistic pattern of all tested equivalent forms. The equivalent impact pseudostatic load model and the maximum solution of the internal force for the tunnel lining structure are thereby partially verified.
An Exact Solution Approach for the Maximum Multicommodity K-splittable Flow Problem
Gamst, Mette; Petersen, Bjørn
2009-01-01
This talk concerns the NP-hard Maximum Multicommodity k-splittable Flow Problem (MMCkFP), in which each commodity may use at most k paths between its origin and its destination. A new branch-and-cut-and-price algorithm is presented. The master problem is a two-index formulation of the MMCkFP and the pricing problem is the shortest path problem with forbidden paths. A new branching strategy forcing and forbidding the use of certain paths is developed. The new branch-and-cut-and-price algorithm is computationally evaluated and compared to results from the literature. The new algorithm shows very...
A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution
Piotrowski, Edward W.; Sładkowski, Jan
2009-03-01
The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a
Maximum precision closed-form solution for localizing diffraction-limited spots in noisy images.
Larkin, Joshua D; Cook, Peter R
2012-07-30
Super-resolution techniques like PALM and STORM require accurate localization of single fluorophores detected using a CCD. Popular localization algorithms inefficiently assume each photon registered by a pixel can only come from an area in the specimen corresponding to that pixel (not from neighboring areas), before iteratively (slowly) fitting a Gaussian to pixel intensity; they fail with noisy images. We present an alternative; a probability distribution extending over many pixels is assigned to each photon, and independent distributions are joined to describe emitter location. We compare algorithms, and recommend which serves best under different conditions. At low signal-to-noise ratios, ours is 2-fold more precise than others, and 2 orders of magnitude faster; at high ratios, it closely approximates the maximum likelihood estimate.
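As a simple point of comparison with the iterative Gaussian fits mentioned above, a closed-form (non-iterative) localizer can be sketched as an intensity-weighted centroid on a synthetic noise-free spot. This is a generic illustration, not the authors' probability-distribution estimator; the spot parameters are invented.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid: a closed-form, single-pass spot localizer."""
    y, x = np.indices(img.shape)
    w = img / img.sum()          # normalize intensities to weights
    return (x * w).sum(), (y * w).sum()

# Synthetic diffraction-limited spot: a Gaussian of width ~1.5 px
# centred at a sub-pixel position on a 17x17 pixel grid.
x0, y0 = 8.3, 7.8
yy, xx = np.indices((17, 17))
spot = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * 1.5 ** 2))
est_x, est_y = centroid(spot)    # recovers (x0, y0) to sub-pixel accuracy
```

On noise-free data the centroid is already sub-pixel accurate; the differences between estimators, which the abstract quantifies, only emerge once background noise and pixelation are added.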
Hu, Hanlin
2015-06-17
The solution-processing of conjugated polymers, just like commodity polymers, is subject to solvent- and molecular-weight-dependent solubility, interactions, and chain entanglements within the polymer, all of which can influence crystallization and microstructure development in semi-crystalline polymers and consequently affect charge transport and optoelectronic properties. Disentanglement of polymer chains in marginal solvents was reported to work via ultrasonication, facilitating the formation of photophysically ordered polymer aggregates. In this contribution, we explore how a wide range of technologically relevant solvents and formulations commonly used in organic electronics influence the chain entanglement and aggregation behaviour of P3HT, using a combination of rheological and spectrophotometric measurements. The specific viscosity of the solution offers an excellent indication of the degree of entanglement in the solution, which is found to be related to the solubility of P3HT in a given solvent. Moreover, deliberately disentangling the solution in the presence of solvophobic driving forces consistently leads to the formation of photophysically visible aggregates, which is indicative of local and perhaps long-range order in the solute. We show for a broad range of solvents and molecular weights that disentanglement ultimately leads to significant ordering of the polymer in the solid state and a commensurate increase in charge transport properties. In doing so we demonstrate a remarkable ability to tune the microstructure, which has important implications for transport properties. We discuss its potential implications in the context of organic photovoltaics.
Kiourtsis, Fotios; Keramitzis, Dimitris; Papatheodorou, Ioannis; Tsoulakaki, Dimitra; Gontzaridou, Marina; Lampetsou, Eugenia; Fragkiskakis, Nikitas; Gerwin, Werner; Repmann, Frank; Baumgarten, Wibke
2017-04-01
In 2016, D.A.M.T, the Hellenic Forest Service for northern Greece (Macedonia and Thrace Regions), with the support of experts from the BTU Cottbus-Senftenberg Research Center Landscape Development and Mining Landscapes and following the common standard protocols of the SEEMLA project, established three plots in the northeastern part of Greece, in Rodopi prefecture (main forest species for biomass production: Pinus nigra, Pinus brutia and Robinia pseudoacacia). Nearby productive ecosystems (including forests etc.) or successional sites will be used as references for estimating the potentials of MagL. Further existing plantations of energy crops on similar MagL will be used to assess potential crop yields. These plots represent different types of marginal lands; they were specifically selected for SEEMLA purposes (reliable and sustainable exploitation of biomass) and are entirely different from other inventories used for typical forest operations in Greece. The main differences are: an intensively studied core area, Soil Quality Rating (SQR) method measurements, Soil Classification Maps - parameters estimation (land capability classes and landforms), tightly spaced plantations (1.5 m x 1.5 m), cropping systems, shorter rotations and the need for a special forest management study. The combination of these requirements with the soil conditions of the area has created significant issues for plot establishment and accurate recording of supply chain stages. The main expected SEEMLA impacts are: • providing a substantial amount of EU energy needs from marginal/degraded land, • avoidance of land use conflicts by strengthening the ability to use MagL for biomass production for energy, • reduction of EU-wide greenhouse gas, • mitigation of conflicts regarding sustainability and biodiversity for the utilization of MagL for biomass production, • growth of plantations of bioenergy carriers from MagL at competitive costs, • expansion of economic opportunities
RJD: A Cost-Effective Frackless Solution for Production Enhancement in Marginal Fields
Ahmed Kamel
2015-08-01
With the worldwide trend of low oil prices, the high maturity of oil fields, the excessive cost of horizontal and fracking technologies, and the necessity for green drilling applications, radial jet drilling (RJD) technology can be a cost-effective and environmentally friendly alternative. RJD is an unconventional drilling technique that utilizes coiled-tubing-conveyed tools and the energy of high-velocity jet fluids to drill laterals inside the reservoir. In recent years, rapid advances in high-pressure water jet technology have tremendously increased its application in the oil and gas industry, not only in drilling operations to improve drilling rate and reduce drilling cost, but also in production to maximize hydrocarbon recovery. In addition, RJD can be used to bypass near-wellbore damage, direct reservoir treatments/injections, improve water disposal and re-injection rates, and assist in steam or CO2 treatments. This paper highlights the theoretical basis, technological advancement, procedures, applications, and challenges of high-pressure water jets. Several worldwide case studies are discussed to evaluate the success, results, pros, and cons of RJD. The results show that an average four- to five-fold production increase can be obtained. The present paper clearly shows that radial jet drilling is a viable and attractive alternative in marginal and small reservoirs that still have significant oil in place, to capture the benefits of horizontal drilling/fracking and to improve productivity from both new wells and/or workover wells that cannot be produced with existing expensive conventional completions.
P. C. Treble
2017-06-01
Terrestrial data spanning the Last Glacial Maximum (LGM) and deglaciation from the southern Australian region are sparse and limited to discontinuous sedimentological and geomorphological records with relatively large chronological uncertainties. This dearth of records has hindered a critical assessment of the role of the Southern Hemisphere mid-latitude westerly winds on the region's climate during this time period. In this study, two precisely dated speleothem records from Mairs Cave, Flinders Ranges, are presented, providing for the first time a detailed terrestrial hydroclimatic record for the southern Australian drylands during 23-15 ka. Recharge to Mairs Cave is interpreted from the speleothem record by the activation of growth, physical flood layering, and δ18O and δ13C minima. Periods of lowered recharge are indicated by 18O and 13C enrichment, primarily affecting δ18O, argued to be driven by evaporation of shallow soil/epikarst water in this water-limited environment. A hydrological driver is supported by calcite fabric changes. These include the presence of laminae, visible organic colloids, and occasional dissolution features, related to recharge, as well as the presence of sediment bands representing cave-floor flooding. A shift to slower-growing, more compact calcite and an absence of lamination is interpreted to represent reduced recharge. The Mairs Cave record indicates that the Flinders Ranges were relatively wet during the LGM and early deglaciation, particularly over the interval 18.9-15.8 ka. This wetter phase ended abruptly with a shift to drier conditions at 15.8 ka. These findings are in agreement with the geomorphic archives for this region, as well as the timing of events in records from the broader Australasian region. The recharge phases identified in the Mairs Cave record are correlated with, but antiphase to, the position of the westerly winds interpreted from marine core MD03-2611, located 550 km south
F.F. de Toledo
1994-12-01
This work was carried out to study the germination of Panicum maximum Jacq. seeds (Colonião, Tobiatã, Centenário, Centauro and Tanzânia varieties) on substrata moistened with three different volumes of 0.2% potassium nitrate solution: 12, 16 and 20 mL. After sowing, the gerbox covers were sealed with tape and taken to a germinator; no water was added afterwards. Germination counts were done every 7 days. The germination test results showed that there were differences among the treatments, depending on the amount of solution applied: the germination percentage with 12 mL of solution applied to the substrata was greater than for the others.
X. Gong
2014-06-01
A bell-shaped vertical profile of chlorophyll a (Chl a) concentration, conventionally referred to as the Subsurface Chlorophyll Maximum (SCM) phenomenon, has frequently been observed in stratified oceans and lakes. This profile is assumed to be a general Gaussian distribution in this study. By substituting the general Gaussian function into ecosystem dynamical equations, the steady-state solutions for the SCM characteristics (i.e., SCM layer depth, thickness, and intensity) in various scenarios are derived. These solutions indicate that: (1) the maximum in Chl a concentration occurs at or below the depth of maximum phytoplankton growth rate, located at the transition from nutrient limitation to light limitation, and the depth of the SCM layer deepens logarithmically with an increase in surface light intensity; (2) the shape of the SCM layer (thickness and intensity) is mainly influenced by nutrient supply, but independent of surface light intensity; (3) the intensity of the SCM layer is proportional to the diffusive flux of nutrient from below, getting stronger as this layer is shrunk by a higher light attenuation coefficient or a larger sinking velocity of phytoplankton. The analytical solutions can be useful for estimating environmental parameters that are difficult to obtain from on-site observations.
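The assumed Gaussian Chl a profile, and how the SCM depth and intensity are read off from it, can be sketched as follows. The functional form (background plus Gaussian peak) is a standard parameterization; the parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

def chl_profile(z, B0, h, zm, sigma):
    """General Gaussian Chl a profile: background B0 plus a subsurface peak of
    integrated biomass h centred at depth zm with width sigma."""
    return B0 + h / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(z - zm) ** 2
                                                          / (2 * sigma ** 2))

z = np.linspace(0, 100, 1001)              # depth grid (m), 0.1 m spacing
chl = chl_profile(z, B0=0.1, h=20.0, zm=40.0, sigma=8.0)

scm_depth = z[np.argmax(chl)]              # SCM layer depth (= zm for this form)
scm_intensity = chl.max() - 0.1            # peak height above the background B0
```

With this parameterization, sigma controls the layer thickness and h/(sigma*sqrt(2*pi)) the intensity, so shrinking the layer (smaller sigma) at fixed integrated biomass h strengthens the maximum, consistent with result (3) in the abstract.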
FENG Guolin; DONG Wenjie; GAO Hongxing
2005-01-01
The time-dependent solution of a reduced air-sea coupling stochastic-dynamic model is obtained accurately by using the Fokker-Planck equation and a quantum mechanical method. Analysis of the time-dependent solution suggests that when the climate system is in the ground state, the behavior of the system appears to be Brownian motion, thereby supporting the foundation of Hasselmann's stochastic climate model; when the system is in the first excited state, the motion of the system exhibits a form of time decay, or under certain conditions a periodic oscillation with a main period of 2.3 yr. Finally, the results are used to discuss the impact of a doubling of carbon dioxide on climate.
刘忠宝; 王士同
2011-01-01
In order to circumvent the deficiencies of the Support Vector Machine (SVM) and its improved algorithms, this paper presents the Maximum-margin Learning Machine based on the Entropy concept and Kernel density estimation (MLMEK). In MLMEK, the data distribution of the samples is represented by kernel density estimation and classification uncertainty is represented by entropy. MLMEK takes boundary data between classes and inner data in each class seriously, so it performs better than traditional SVM. MLMEK works for both two-class and one-class pattern classification. Experimental results on UCI data sets verify that the algorithm proposed in the paper is effective and competitive.
Hardware trojan detection method based on maximum margin criterion
李雄伟; 王晓晗; 张阳; 徐璐
2016-01-01
Aimed at detecting hardware trojans, the statistical properties of power side-channel signals were analyzed and a model of the trojan detection problem was established. On this basis, a hardware trojan detection method based on power side-channel signals was proposed. The method uses the maximum margin criterion (MMC) to process the side-channel signals, builds the projection subspace that reflects the largest difference between the side-channel signals of a reference chip and a trojan chip, and detects hardware trojans in integrated circuit chips by comparing the differences between the projections. The method was verified by physical experiment: trojan circuits of different sizes were implanted into an Advanced Encryption Standard (AES) encryption circuit implemented on a field programmable gate array (FPGA) chip, power side-channel signals were collected (1000 samples for each trojan circuit), and the sample signals were processed with the maximum margin criterion. The results show that the MMC method can effectively distinguish the statistical-feature differences between the side-channel signals of the reference chip and the trojan chip, realizing hardware trojan detection. Compared with the Karhunen-Loève (K-L) transform method, it achieves better detection performance.
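The projection step can be sketched with the common formulation of the maximum margin criterion, in which W consists of the top eigenvectors of Sb - Sw (between-class minus within-class scatter), requiring no matrix inversion. This is a generic MMC sketch, not the paper's pipeline; the toy two-class data merely stands in for reference-chip and trojan-chip side-channel samples.

```python
import numpy as np

def mmc_projection(X, y, dim=1):
    """Projection maximizing the maximum margin criterion tr(W^T (Sb - Sw) W):
    W holds the top-`dim` eigenvectors of Sb - Sw."""
    mean = X.mean(axis=0)
    Sb = np.zeros((X.shape[1], X.shape[1]))   # between-class scatter
    Sw = np.zeros_like(Sb)                    # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        d = (Xc.mean(axis=0) - mean).reshape(-1, 1)
        Sb += len(Xc) * d @ d.T
        Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
    vals, vecs = np.linalg.eigh(Sb - Sw)
    return vecs[:, np.argsort(vals)[::-1][:dim]]

# Toy stand-in data: two classes separated along the first feature axis.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 2)) + [0, 0],
               rng.normal(0, 0.1, (50, 2)) + [3, 0]])
y = np.array([0] * 50 + [1] * 50)
W = mmc_projection(X, y)   # aligns with the axis of maximum class separation
```

Because MMC maximizes the difference Sb - Sw rather than the ratio used by Fisher discriminant analysis, it avoids inverting a possibly singular Sw, which matters when the side-channel traces are high-dimensional relative to the number of samples.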
Maximum Margin Fuzzy Classifier with N-S Magnetic Pole Effect
刘忠宝; 裴松年; 杨秋翔
2016-01-01
Inspired by space geometry and magnetic pole effect theory, a maximum margin fuzzy classifier with an N-S magnetic pole effect (MPMMFC) is proposed in this paper. The main idea is to find an optimal hyperplane based on the N-S magnetic pole effect such that one class, attracted by the pole, lies as close to the hyperplane as possible, while the other class, repelled by the pole, lies as far from the hyperplane as possible. Moreover, because the traditional support vector machine (SVM) is sensitive to noise and outliers, a fuzzy technique is introduced to reduce their influence on classification, further improving classification efficiency and generalization performance. Experimental results on synthetic datasets and UCI datasets show that the proposed approach is effective.
Practical Marginalized Multilevel Models.
Griswold, Michael E; Swihart, Bruce J; Caffo, Brian S; Zeger, Scott L
2013-01-01
Clustered data analysis is characterized by the need to describe both systematic variation in a mean model and cluster-dependent random variation in an association model. Marginalized multilevel models (MMMs) embrace the robustness and interpretations of a marginal mean model, while retaining the likelihood inference capabilities and flexible dependence structures of a conditional association model. Although there has been increasing recognition of the attractiveness of marginalized multilevel models, there has been a gap in their practical application arising from a lack of readily available estimation procedures. We extend the marginalized multilevel model to allow for nonlinear functions in both the mean and association aspects. We then formulate marginal models through conditional specifications to facilitate estimation with mixed model computational solutions already in place. We illustrate the MMM and approximate MMM approaches on a cerebrovascular deficiency crossover trial using SAS and an epidemiological study on race and visual impairment using R. Datasets, SAS and R code are included as supplemental materials.
The Solution to the Maximum Value of a Special Class of Trigonometric Functions
周桂如
2015-01-01
First, in triangle △ABC, the maximum value of 3sinA + 4sinB + 18sinC with these specific coefficients is sought; three methods (step-by-step analysis, the Lagrange multiplier method, and inequalities) yield the same result. Then, the Lagrange multiplier method is used to derive a procedure for maximizing a general trigonometric function asinA + bsinB + csinC with arbitrary coefficients a, b, c ∈ R+. Finally, the extreme value of the trigonometric function acosA + bcosB + ccosC with coefficients a, b, c ∈ R+ is derived.
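The special-coefficient case above can be checked numerically. A brute-force grid search (an illustration, not the paper's analytic methods) over valid triangles recovers the constrained maximum of 3sinA + 4sinB + 18sinC:

```python
import numpy as np

# Brute-force check (an illustration, not the paper's analytic methods):
# maximize f = 3 sin A + 4 sin B + 18 sin C subject to A + B + C = pi,
# A, B, C > 0. Substituting C = pi - A - B gives sin C = sin(A + B).
a, b, c = 3.0, 4.0, 18.0
A, B = np.meshgrid(np.linspace(1e-3, np.pi, 800),
                   np.linspace(1e-3, np.pi, 800), indexing="ij")
f = a * np.sin(A) + b * np.sin(B) + c * np.sin(A + B)
f[(A + B) >= np.pi] = -np.inf              # keep only valid triangles
best = f.max()
i, j = np.unravel_index(f.argmax(), f.shape)
print(f"max ~ {best:.4f} at A ~ {A[i, j]:.3f} rad, B ~ {B[i, j]:.3f} rad")
```

The grid maximum agrees with the stationarity conditions from the Lagrange multiplier method (cosA = 6cosC and cosB = 4.5cosC at the optimum), which place the maximum near 23.15.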
A robust conditional approximation of marginal tail probabilities.
Brazzale, A. R.; Ventura, L.
2001-01-01
The aim of this contribution is to derive a robust approximate conditional procedure used to eliminate nuisance parameters in regression and scale models. Unlike the approximations to exact conditional solutions based on the likelihood function and on the maximum likelihood estimator, the robust conditional approximation of marginal tail probabilities does not suffer from lack of robustness to model misspecification. To assess the performance of the proposed robust conditional procedure the r...
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Blandamer, MJ; Buurma, NJ; Engberts, JBFN; Reis, JCR; Buurma, Niklaas J.; Reis, João C.R.
2003-01-01
At temperatures above and below the temperature of maximum density, TMD, for water at ambient pressure, pairs of temperatures exist at which the molar volumes of water are equal. First-order rate constants for the pH-independent hydrolysis of 1-benzoyl-1,2,4-triazole in aqueous solution at pairs of
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions, p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps: the first step solves the optimization problem without considering the equal margin posteriors from the two views, and the second step then enforces the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Margin requirements, margin loans, and margin rates: practice and principles
Peter Fortune
2000-01-01
The Board of Governors of the Federal Reserve System establishes initial margin requirements under Regulations T, U, and X. Recent margin loan increases, both in aggregate value and relative to market capitalization, have rekindled the debate about using margin requirements as an instrument to affect the prices of common stocks. Proponents of a more active margin requirement policy see the regulations as instruments for affecting the level and volatility of stock prices by influencing investo...
Metin I Eren
BACKGROUND: Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded, assuming only a lower bound on the number of species or classes. Nevertheless there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. METHODOLOGY/PRINCIPAL FINDINGS: Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. CONCLUSIONS/SIGNIFICANCE: Application of the new method for constructing doubly-bounded richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race).
Maximum Entropy and Probability Kinematics Constrained by Conditionals
Stefan Lukits
2015-03-01
Two open questions of inductive reasoning are solved: (1) does the principle of maximum entropy (PME) give a solution to the obverse Majerník problem; and (2) is Wagner correct when he claims that Jeffrey's updating principle (JUP) contradicts PME? Majerník shows that PME provides unique and plausible marginal probabilities, given conditional probabilities. The obverse problem posed here is whether PME also provides such conditional probabilities, given certain marginal probabilities. The theorem developed to solve the obverse Majerník problem demonstrates that in the special case introduced by Wagner, PME does not contradict JUP, but elegantly generalizes it and offers a more integrated approach to probability updating.
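The PME behavior at issue here can be illustrated with a small discrete example (constructed for this note, not taken from the paper): among all joint distributions with the same fixed marginals, the independent product maximizes entropy, so any correlated alternative with those marginals has strictly lower entropy:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats, ignoring zero cells."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

px = np.array([0.2, 0.3, 0.5])      # prescribed marginal for X
py = np.array([0.6, 0.4])           # prescribed marginal for Y

# With only the marginals constrained, the maximum-entropy joint is the product.
joint_indep = np.outer(px, py)

# A correlated alternative with the same marginals: move mass around a 2x2 block.
joint_corr = joint_indep.copy()
joint_corr[0, 0] += 0.05; joint_corr[0, 1] -= 0.05
joint_corr[1, 0] -= 0.05; joint_corr[1, 1] += 0.05

assert np.allclose(joint_corr.sum(axis=1), px)   # marginals preserved
assert np.allclose(joint_corr.sum(axis=0), py)
print(entropy(joint_indep), entropy(joint_corr))
```

The independent joint attains entropy H(px) + H(py), the maximum compatible with the two marginal constraints.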
Marginal maximum likelihood estimation in polytomous Rasch models using SAS
Christensen, Karl Bang; Olsbjerg, Maja
2013-01-01
Danish radio history is in many ways an unwritten chapter. Although there are several publications from Statsradiofonien itself and a four-volume Danish media history, many central issues still lie in the dark of history. The present dissertation seeks to remedy this, with a focus on tid...
Maria de Hoyos Guajardo, Ph.D. Candidate, M.Sc., B.Eng.
2004-11-01
The theory presented below aims to conceptualise how a group of undergraduate students tackle non-routine mathematical problems during a problem-solving course. The aim of the course is to allow students to experience mathematics as a creative process and to reflect on their own experience. During the course, students are required to produce a written 'rubric' of their work, i.e., to document their thoughts as they occur, as well as their emotions during the process. These 'rubrics' were used as the main source of data. Students' problem-solving processes can be explained as a three-stage process that has been called 'solutioning'. This process is presented in the six sections below. The first three refer to a common area of concern that can be called 'generating knowledge'. In this way, generating knowledge also includes issues related to 'key ideas' and 'gaining understanding'. The third and the fourth sections refer to 'generating' and 'validating a solution', respectively. Finally, once solutions are generated and validated, students usually try to improve them further before presenting them as final results. Thus, the last section deals with 'improving a solution'. Although not all students go through all of the stages, it may be said that 'solutioning' considers students' main concerns as they tackle non-routine mathematical problems.
Microleakage of Four Dental Cements in Metal Ceramic Restorations With Open Margins.
Eftekhar Ashtiani, Reza; Farzaneh, Babak; Azarsina, Mohadese; Aghdashi, Farzad; Dehghani, Nima; Afshari, Aisooda; Mahshid, Minu
2015-11-01
Fixed prosthodontics is a routine dental treatment and microleakage is a major cause of its failure. The aim of this study was to assess the marginal microleakage of four cements in metal ceramic restorations with adapted and open margins. Sixty sound human premolars were selected for this experimental study, performed in Tehran, Iran, and prepared for full-crown restorations. Wax patterns were formed leaving a 300 µm gap on one of the proximal margins. The crowns were cast and the samples were randomly divided into four groups based on the cement used. Copings were cemented using zinc phosphate cement (Fleck), Fuji Plus resin-modified glass ionomer, Panavia F2.0 resin cement, or G-Cem resin cement, according to the manufacturers' instructions. Samples were immersed in 2% methylene blue solution. After 24 hours, dye penetration was assessed under a stereomicroscope and analyzed using the respective software. Data were analyzed using ANOVA, paired t-tests, and Kruskal-Wallis, Wilcoxon, and Mann-Whitney tests. The least microleakage occurred in the Panavia F2.0 group (closed margin, 0.18 mm; open margin, 0.64 mm) and the maximum was observed in the Fleck group (closed margin, 1.92 mm; open margin, 3.32 mm). The Fleck group displayed significantly more microleakage compared to the Fuji Plus and Panavia F2.0 groups (P < ...). In metal ceramic restorations, clinicians should try to minimize marginal gaps in order to reduce restoration failure. In situations where there are doubts about perfect marginal adaptation, the use of Fuji Plus cement may be helpful.
Caldarelli, Stefano; Catalano, Donata; Di Bari, Lorenzo; Lumetti, Marco; Ciofalo, Maurizio; Alberto Veracini, Carlo
1994-07-01
The dipolar couplings observed by NMR spectroscopy of solutes in nematic solvents (LX-NMR) are used to build up the maximum entropy (ME) probability distribution function of the variables describing the orientational and internal motion of the molecule. The ME conformational distributions of 2,2'- and 3,3'-dithiophene and 2,2':5',2″-terthiophene (α-terthienyl) thus obtained are compared with the results of previous studies. The 2,2'- and 3,3'-dithiophene molecules exhibit equilibria among cisoid and transoid forms; the probability maxima correspond to planar and twisted conformers for 2,2'- and 3,3'-dithiophene, respectively. 2,2':5',2″-Terthiophene has two internal degrees of freedom; the ME approach indicates that the trans,trans and cis,trans planar conformations are the most probable. The correlation between the two intramolecular rotations is also discussed.
The Role of Weight Shrinking in Large Margin Perceptron Learning
Panagiotakopoulos, Constantinos
2012-01-01
We introduce into the classical perceptron algorithm with margin a mechanism that shrinks the current weight vector as a first step of the update. If the shrinking factor is constant the resulting algorithm may be regarded as a margin-error-driven version of NORMA with constant learning rate. In this case we show that the allowed strength of shrinking depends on the value of the maximum margin. We also consider variable shrinking factors for which there is no such dependence. In both cases we obtain new generalizations of the perceptron with margin able to provably attain in a finite number of steps any desirable approximation of the maximal margin hyperplane. The new approximate maximum margin classifiers appear experimentally to be very competitive in 2-norm soft margin tasks involving linear kernels.
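A minimal sketch of the shrink-then-update step on synthetic separable data follows; the data, the shrinking factor, the target margin, and the learning rate are illustrative assumptions, not the paper's settings (the paper analyzes how strong the shrinking may be relative to the maximum margin):

```python
import numpy as np

rng = np.random.default_rng(1)

# Linearly separable toy data with a guaranteed geometric margin of about 0.3
# (w_true and the margin construction are illustrative assumptions).
n, d = 100, 2
w_true = np.array([1.0, 2.0])
u = w_true / np.linalg.norm(w_true)
X = rng.normal(0.0, 1.0, (n, d))
y = np.where(X @ w_true > 0, 1.0, -1.0)
X += 0.3 * y[:, None] * u                 # push points away from the separator

w = np.zeros(d)
target_margin, eta, shrink = 0.1, 0.5, 0.999
for _ in range(200):                      # epochs
    updated = False
    for i in range(n):
        if y[i] * (X[i] @ w) <= target_margin:   # margin error
            w *= shrink                          # shrink first ...
            w += eta * y[i] * X[i]               # ... then the perceptron update
            updated = True
    if not updated:                       # no margin errors left
        break

errors = int(np.sum(y * (X @ w) <= 0))
print("misclassified:", errors)
```

With a shrinking factor this close to one, the algorithm behaves like the classical margin perceptron and separates the toy data; stronger shrinking is where the paper's margin-dependent analysis comes into play.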
Grigoriev, I. S.; Grigoriev, K. G.
2003-05-01
The necessary first-order conditions of strong local optimality (maximum principle conditions) are considered for problems of optimal control over a set of dynamic systems. To derive them, a method is suggested based on the Lagrange principle of removing constraints in problems on a conditional extremum in a functional space. An algorithm is suggested for converting the problem of optimal control of an aggregate of dynamic systems into a multipoint boundary value problem for a set of systems of ordinary differential equations, with the complete set of conditions necessary for its solution. An example of the application of the proposed methods and algorithm is considered: constructing the trajectories of a spacecraft flight at a constant altitude above a preset area (or above a preset point) of a planet's surface in a vacuum (for a planet with an atmosphere, beyond the atmosphere). The spacecraft is launched from a certain circular orbit of a planet's satellite. This orbit is to be determined (optimized). The satellite is then injected into the desired trajectory segment (or toward the desired point) of a flyby above the planet's surface at a specified altitude. After the flyby, the satellite is returned to the initial circular orbit. A method is proposed for correctly accounting for constraints imposed on overload (mixed restrictions of inequality type) and on the distance from the planet center: extended (non-pointlike) intermediate (phase) restrictions of the equality type.
None
2000-01-01
Based on the Chinese mainland GPS network (1994-1996), the Fujian GPS network (1995-1997), the cross-fault deformation network (1982-1998), the precise leveling network (1973-1980), and focal mechanism solutions from recent decades, we synthetically and quantitatively studied the present-day crustal motion of the southeast coast of the Chinese mainland, Fujian and its marginal sea. We find that this area, together with the mainland, moves toward the SE at a rather constant velocity of 11.2 ± 3.0 mm/a. At the same time, there is a motion from Quanzhou Bay toward the hinterland, with a major NW orientation, extending toward both sides, at an average velocity of 3.0 ± 2.6 mm/a. The NE-oriented faults show compressive motion, and the NW-oriented ones show extensional motion. The present-day strain field derived from crustal deformation is consistent with the seismic stress field derived from the focal mechanism solutions and the tectonic stress field derived from geological data. The principal compressive stress is oriented NW (NWW) - SE (SEE). Demarcated by the NW-oriented faults of Quanzhou Bay and Jinjiang-Yongan, the crustal motion shows regional characteristics: the southwest of Fujian and the border region of Fujian and Guangdong are rising areas, while the northeast of Fujian is a sinking area. The horizontal strain rate and fault motion of the former are both greater than those of the latter. The lateral transfer of motion from the Himalayan collision zone and the compression of the western Pacific subduction zone both affect the motion of the study area. The amount of motion contributed by the former is larger, but the former is homogeneous while the latter is not, which indicates that strong earthquakes in this region relate more directly to the western Pacific subduction zone.
Jensen, Niels Rosendal
2009-01-01
The article is based on a keynote speech in Bielefeld on the subject "welfare state and marginalized youth", focusing upon the high ambition of expanding schooling in Denmark from 9 to 12 years. The unintended effect may be a new kind of marginalization.
NONE
1990-12-31
The Department of Energy (DOE) is announcing the refocusing of its marine research program to emphasize the study of ocean margins and their role in modulating, controlling, and driving Global Change phenomena. This is a proposal to conduct a workshop that will establish priorities and an implementation plan for a new research initiative by the Department of Energy on the ocean margins. The workshop will be attended by about 70 scientists who specialize in ocean margin research. The workshop will be held in the Norfolk, Virginia area in late June 1990.
"We call ourselves marginalized"
Jørgensen, Nanna Jordt
2014-01-01
In recent decades, indigenous knowledge has been added to the environmental education agenda in an attempt to address the marginalization of non-western perspectives. While these efforts are necessary, the debate is often framed in terms of a discourse of victimization that overlooks the agency o...... argue that researchers not only need to pay attention to how certain voices are marginalized in Environmental Education research and practice, but also to how learners as agents respond to, use and negotiate the marginalization of their perspectives....
Naqvi, S.W.A.
The most important biogeochemical transformations and boundary exchanges in the Indian Ocean seem to occur in the northern region, where the processes originating at the land-ocean boundary extend far beyond the continental margins. Exchanges across...
Learning unbelievable marginal probabilities
Pitkow, Xaq; Miller, Ken D
2011-01-01
Loopy belief propagation performs approximate inference on graphical models with loops. One might hope to compensate for the approximation by adjusting model parameters. Learning algorithms for this purpose have been explored previously, and the claim has been made that every set of locally consistent marginals can arise from belief propagation run on a graphical model. On the contrary, here we show that many probability distributions have marginals that cannot be reached by belief propagation using any set of model parameters or any learning algorithm. We call such marginals `unbelievable.' This problem occurs whenever the Hessian of the Bethe free energy is not positive-definite at the target marginals. All learning algorithms for belief propagation necessarily fail in these cases, producing beliefs or sets of beliefs that may even be worse than the pre-learning approximation. We then show that averaging inaccurate beliefs, each obtained from belief propagation using model parameters perturbed about some le...
Distribution of maximum loss of fractional Brownian motion with drift
Çağlar, Mine; Vardar-Acar, Ceren
2013-01-01
In this paper, we find bounds on the distribution of the maximum loss of fractional Brownian motion with H >= 1/2 and derive estimates on its tail probability. Asymptotically, the tail of the distribution of maximum loss over [0, t] behaves like the tail of the marginal distribution at time t.
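For H = 1/2, fractional Brownian motion reduces to standard Brownian motion, and the maximum loss over [0, t] is sup over s ≤ u of (B_s − B_u), i.e. the maximum drawdown of the path. A discrete-path simulation (an illustration only; it does not reproduce the paper's bounds):

```python
import numpy as np

rng = np.random.default_rng(2)

# For H = 1/2, fractional Brownian motion is standard Brownian motion.
# Maximum loss over [0, t]: sup over s <= u of (B_s - B_u), i.e. the
# maximum drawdown of the path (a discrete-path illustration only).
n_steps, dt = 10_000, 1e-3
increments = rng.normal(0.0, np.sqrt(dt), n_steps)
B = np.concatenate([[0.0], np.cumsum(increments)])

running_max = np.maximum.accumulate(B)    # sup of the path up to each time
max_loss = float(np.max(running_max - B))
print(f"maximum loss over [0, {n_steps * dt:g}]: {max_loss:.3f}")
```

By construction the maximum loss is at least the drop from the path's overall maximum to its terminal value, and at most the path's total range; those are the deterministic sanity checks below.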
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Parker, Noel
2009-01-01
Drawing upon Deleuze's philosophy, the article sets out an ontology in which the continual reformulation of entities in play in 'post-international' society can be grasped. This entails a strategic shift from speaking about the 'borders' between sovereign states to referring instead to the 'margins' between a plethora...
Marginally Deformed Starobinsky Gravity
Codello, A.; Joergensen, J.; Sannino, Francesco
2015-01-01
We show that quantum-induced marginal deformations of the Starobinsky gravitational action of the form $R^{2(1-\alpha)}$, with $R$ the Ricci scalar and $\alpha$ a positive parameter smaller than one half, can account for the recent experimental observations by BICEP2 of primordial tensor modes...
"We call ourselves marginalized"
Jørgensen, Nanna Jordt
2014-01-01
In recent decades, indigenous knowledge has been added to the environmental education agenda in an attempt to address the marginalization of non-western perspectives. While these efforts are necessary, the debate is often framed in terms of a discourse of victimization that overlooks the agency o...
Jensen, Sune Qvotrup
2010-01-01
This article analyses how young marginalized ethnic minority men in Denmark react to the othering they are subject to in the media as well as in the social arenas of everyday life. The article is based on theoretically informed ethnographic fieldwork among such young men, as well as interviews and other types of material. Taking the concepts of othering, intersectionality and marginality as its point of departure, the article analyses how these young men experience othering and how they react to it. One type of reaction, described as stylization, relies on accentuating the latently positive symbolic... of critique, although in a masculinist way. These reactions to othering represent a challenge to researchers interested in intersectionality and gender, because gender is reproduced as a hierarchical form of social differentiation at the same time as racism is both reproduced and resisted.
Actively stressed marginal networks.
Sheinman, M; Broedersz, C P; MacKintosh, F C
2012-12-07
We study the effects of motor-generated stresses in disordered three-dimensional fiber networks using a combination of a mean-field theory, scaling analysis, and a computational model. We find that motor activity controls the elasticity in an anomalous fashion close to the point of marginal stability by coupling to critical network fluctuations. We also show that motor stresses can stabilize initially floppy networks, extending the range of critical behavior to a broad regime of network connectivities below the marginal point. Away from this regime, or at high stress, motors give rise to a linear increase in stiffness with stress. Finally, we demonstrate that our results are captured by a simple, constitutive scaling relation highlighting the important role of nonaffine strain fluctuations as a susceptibility to motor stress.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Stochastic margin-based structure learning of Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael
2013-02-01
The margin criterion for parameter learning in graphical models has gained significant attention in recent years. We use the maximum margin score for discriminatively optimizing the structure of Bayesian network classifiers. Furthermore, greedy hill-climbing and simulated annealing search heuristics are applied to determine the classifier structures. In the experiments, we demonstrate the advantages of maximum margin optimized Bayesian network structures in terms of classification performance compared to traditionally used discriminative structure learning methods. Stochastic simulated annealing requires fewer score evaluations than greedy heuristics. Additionally, we compare generative and discriminative parameter learning on both generatively and discriminatively structured Bayesian network classifiers. Margin-optimized Bayesian network classifiers achieve similar classification performance as support vector machines. Moreover, missing feature values during classification can be handled by discriminatively optimized Bayesian network classifiers, a case where purely discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Iberian Atlantic Margins Group investigates deep structure of ocean margins
The Iberian Atlantic Margins Group; Banda, Enric; Torne, Montserrat
With recent seismic reflection data in hand, investigators for the Iberian Atlantic Margins project are preparing images of the deep continental and oceanic margins of Iberia. In 1993, the IAM group collected near vertical incidence seismic reflection data over a total distance of 3500 km along the North and Western Iberian Margins, Gorringe Bank Region and Gulf of Cadiz (Figure 1). When combined with data on the conjugate margin off Canada, details of the Iberian margin's deep structure should aid in distinguishing rift models and improve understanding of the processes governing the formation of margins. The North Iberian passive continental margin was formed during a Permian to Triassic phase of extension and matured during the early Cretaceous by rotation of the Iberian Peninsula with respect to Eurasia. From the late Cretaceous to the early Oligocene period, Iberia rotated in a counterclockwise direction around an axis located west of Lisbon. The plate boundary between Iberia and Eurasia, which lies along the Pyrenees, follows the north Spanish marginal trough, trends obliquely in the direction of the fossil Bay of Biscay triple junction, and continues along the Azores-Biscay Rise [Sibuet et al., 1994]. Following the NE-SW convergence of Iberia and Eurasia, the reactivation of the North Iberian continental margin resulted in the formation of a marginal trough and accretionary prism [Boillot et al., 1971].
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of a regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
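The alternating optimization described in this abstract can be illustrated with a small, hypothetical sketch (not the paper's algorithm, data, or classifier): a linear predictor trained under a Gaussian correntropy objective with an ℓ2 penalty, solved by half-quadratic iteration, where each outer step fixes per-sample weights and solves a weighted ridge regression. All names, constants, and the toy data are invented for illustration.

```python
import numpy as np

def mcc_linear_predictor(X, y, sigma=1.0, lam=1e-3, iters=20):
    """Maximize sum_i exp(-(y_i - x_i.w)^2 / (2 sigma^2)) - lam * ||w||^2
    by half-quadratic iteration: fix per-sample weights a_i (near 1 for
    well-fit samples, near 0 for outliers), then solve a weighted ridge
    regression, and repeat."""
    n, d = X.shape
    # initialize with ordinary ridge regression (all sample weights = 1)
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    for _ in range(iters):
        r = y - X @ w                                  # residuals
        a = np.exp(-r**2 / (2 * sigma**2))             # correntropy weights
        A = X.T @ (a[:, None] * X) + lam * np.eye(d)   # weighted normal eqs.
        w = np.linalg.solve(A, X.T @ (a * y))
    return w

# Toy data: y = 1 + 2x plus tiny noise, with five grossly mislabeled samples.
rng = np.random.default_rng(0)
x = rng.normal(size=50)
X = np.c_[np.ones(50), x]
y = 1.0 + 2.0 * x + 0.01 * rng.normal(size=50)
y[:5] += 10.0                                          # outlying labels
w = mcc_linear_predictor(X, y, sigma=1.0)
```

Because the Gaussian weights `a_i` vanish for large residuals, the five corrupted samples are effectively ignored and `w` stays close to the clean coefficients (1, 2), which is the robustness property the abstract attributes to the MCC objective.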
The Seismicity of Two Hyperextended Margins
Redfield, Tim; Terje Osmundsen, Per
2013-04-01
, loads generated by escarpment erosion, offshore sedimentary deposition, and post-glacial rebound have been periodically superimposed throughout the Neogene. Their vertical stress patterns are mutually-reinforcing during deglaciation. However, compared to the post-glacial dome the pattern of maximum uplift/unloading generated by escarpment erosion will be longer, more linear, and located atop the emergent proximal margin. The pattern of offshore maximum deposition/loading will be similar. This may help explain the asymmetric expenditure of Fennoscandia's annual seismic energy budget. It may also help explain the obvious Conundrum: if stress generated by erosion and deposition is sufficiently great, fault reactivation and consequent seismicity can occur at any hyperextended passive margin sector regardless of its glacial history. Onshore Scandinavia, episodic footwall uplift and escarpment rejuvenation may have been driven by just such a mechanism throughout much of the later Cretaceous and Cenozoic. SE Brasil offers a glimpse of how Norway's hyperextended margin might manifest itself seismically in the absence of post-glacial rebound. Compilations suggest two seismic belts may exist. One, offshore, follows the thinned crust of the ultra-deep, hyperextended Campos and Santos basins. Onshore, earthquakes occur more commonly in the elevated highlands of the escarpments, and track especially the long, linear ranges such as the Serra de Mantiquiera and Serra do Espinhaço. Seismicity is more rare in the coastal lowlands, and largely absent in the Brasilian hinterland. Although never glaciated since the time of hyperextension and characterized by significantly fewer earthquakes in toto, SE Brasil's pattern of seismicity closely mimics Scandinavia. Commencing after perhaps just a few tens of millions of years of 'sag' basin infill, accommodation phase fault reactivation and footwall uplift at passive margins is the inexorable product of hyperextension. 
Marginally Stable Nuclear Burning
Strohmayer, Tod E.; Altamirano, D.
2012-01-01
Thermonuclear X-ray bursts result from unstable nuclear burning of the material accreted on neutron stars in some low mass X-ray binaries (LMXBs). Theory predicts that close to the boundary of stability oscillatory burning can occur. This marginally stable regime has so far been identified in only a small number of sources. We present Rossi X-ray Timing Explorer (RXTE) observations of the bursting, high-inclination LMXB 4U 1323-619 that reveal for the first time in this source the signature of marginally stable burning. The source was observed during two successive RXTE orbits for approximately 5 ksec beginning at 10:14:01 UTC on March 28, 2011. Significant mHz quasi-periodic oscillations (QPOs) at a frequency of 8.1 mHz are detected for approximately 1600 s from the beginning of the observation until the occurrence of a thermonuclear X-ray burst at 10:42:22 UTC. The mHz oscillations are not detected following the X-ray burst. The average fractional rms amplitude of the mHz QPOs is 6.4% (3 - 20 keV), and the amplitude increases to about 8% below 10 keV. This phenomenology is strikingly similar to that seen in the LMXB 4U 1636-53. Indeed, the frequency of the mHz QPOs in 4U 1323-619 prior to the X-ray burst is very similar to the transition frequency between mHz QPOs and bursts found in 4U 1636-53 by Altamirano et al. (2008). These results strongly suggest that the observed QPOs in 4U 1323-619 are, like those in 4U 1636-53, due to marginally stable nuclear burning. We also explore the dependence of the energy spectrum on the oscillation phase, and we place the present observations within the context of the spectral evolution of the accretion-powered flux from the source.
陈丽萍
2014-01-01
The Chinese language course at higher vocational colleges is an effective vehicle for strengthening students' humanistic knowledge and humanistic spirit and for cultivating their sense of social responsibility, morality, and mission. At present, however, the course is becoming increasingly marginalized at vocational colleges: the connotation and scope of its curricular goals are vague, and it can hardly carry the important task of cultivating humanistic quality. To move the course out of this marginalized state as soon as possible, institutions should begin by strengthening the development of teaching materials, emphasize and enhance the edifying function of the Chinese course, build high-quality vocational Chinese courses, and draw on a variety of teaching and cultural activities to foster an ethos of Chinese language learning.
Gaussian quantum marginal problem
Eisert, J; Sanders, B C; Tyc, T
2007-01-01
The quantum marginal problem asks what local spectra are consistent with a given state of a composite quantum system. This setting, also referred to as the question of the compatibility of local spectra, has several applications in quantum information theory. Here, we introduce the analogue of this statement for Gaussian states for any number of modes, and solve it in generality, for pure and mixed states, both concerning necessary and sufficient conditions. Formally, our result can be viewed as an analogue of the Sing-Thompson Theorem (respectively Horn's Lemma), characterizing the relationship between main diagonal elements and singular values of a complex matrix: We find necessary and sufficient conditions for vectors (d1, ..., dn) and (c1, ..., cn) to be the symplectic eigenvalues and symplectic main diagonal elements of a strictly positive real matrix, respectively. More physically speaking, this result determines what local temperatures or entropies are consistent with a pure or mixed Gaussian state of ...
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector that is used to mitigate the intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Hengesh, J. V.; Whitney, B. B.
2016-05-01
Australia's northwestern passive margin intersects the eastern termination of the Java trench segment of the Sunda arc subduction zone and the western termination of the Timor trough along the Banda arc tectonic collision zone. Differential relative motion between the Sunda arc subduction zone and the Banda arc collision zone has reactivated the former rifted margin of northwestern Australia, as evidenced by Pliocene to Quaternary age deformation along a 1400 km long offshore fault system. The fault system has higher rates of seismicity than the adjacent nonextended crustal terranes, has produced the largest historical earthquake in Australia (1941 ML 7.3 Meeberrie event), and is dominated by focal mechanism solutions consistent with dextral motion along northeast trending fault planes. The faults crosscut late Miocene unconformities that are eroded across middle Miocene inversion structures, suggesting multiple phases of Neogene and younger fault reactivation. Onset of deformation is consistent with the timing of the collision of the Scott Plateau part of the passive continental margin with the former Banda trench between 3.0 Ma and present. The range of estimated maximum horizontal slip rates across the zone is ~1.4 to 2.6 mm yr-1, at the threshold of geodetically detectable motion, yet significant with respect to an intraplate tectonic setting. The folding and faulting along this part of the continental margin provides an example of intraplate deformation resulting from kinematic transitions along a distant plate boundary and demonstrates the presence of a youthful evolving intraplate fault system within the Indo-Australian plate.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
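For readers unfamiliar with the Wiener index mentioned in this abstract, a brief illustrative sketch (not the authors' pseudo-polynomial algorithm) computes it by breadth-first search from every vertex; the path and the star below, two trees with different degree sequences on four vertices, show how the index varies across trees.

```python
from collections import deque

def wiener_index(adj):
    """Wiener index of a connected graph: the sum of shortest-path
    distances over all unordered vertex pairs.  adj maps each vertex
    to a list of its neighbours; distances are found by BFS from
    every vertex (edges have unit length)."""
    total = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2  # each unordered pair was counted twice

# Path P4 (degree sequence 2, 2, 1, 1): pair distances 1+2+3+1+2+1 = 10
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
# Star K1,3 (degree sequence 3, 1, 1, 1): pair distances 1+1+1+2+2+2 = 9
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(wiener_index(path), wiener_index(star))  # → 10 9
```

Among trees on four vertices the path maximizes the index and the star minimizes it, which is the kind of extremal question the abstract's degree-sequence problem generalizes.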
Workers' marginal costs of commuting
van Ommeren, Jos; Fosgerau, Mogens
2009-01-01
This paper applies a dynamic search model to estimate workers' marginal costs of commuting, including monetary and time costs. Using data on workers' job search activity as well as moving behaviour, for the Netherlands, we provide evidence that, on average, workers' marginal costs of one hour...
Bioeconomic Sustainability of Cellulosic Biofuel Production on Marginal Lands
Gutierrez, Andrew Paul; Ponti, Luigi
2009-01-01
The use of marginal land (ML) for lignocellulosic biofuel production is examined for system stability, resilience, and eco-social sustainability. A North American prairie grass system and its industrialization for maximum biomass production using biotechnology and agro-technical inputs is the focus of the analysis. Demographic models of ML biomass…
Massad, Tariq; Jarvet, Jueri [Stockholm University, Department of Biochemistry and Biophysics (Sweden); Tanner, Risto [National Institute of Chemical Physics and Biophysics (Estonia); Tomson, Katrin; Smirnova, Julia; Palumaa, Peep [Tallinn Technical University, Inst. of Gene Technology (Estonia); Sugai, Mariko; Kohno, Toshiyuki [Mitsubishi Kagaku Institute of Life Sciences (MITILS) (Japan); Vanatalu, Kalju [Tallinn Technical University, Inst. of Gene Technology (Estonia); Damberg, Peter [Stockholm University, Department of Biochemistry and Biophysics (Sweden)], E-mail: peter.damberg@dbb.su.se
2007-06-15
In this paper, we present a new method for structure determination of flexible 'random-coil' peptides. A numerical method is described, where the experimentally measured ³J(HN,Hα) and ³J(Hα,N(i+1)) couplings, which depend on the φ and ψ dihedral angles, are analyzed jointly with the information from a coil-library through a maximum entropy approach. The coil-library is the distribution of dihedral angles found outside the elements of the secondary structure in the high-resolution protein structures. The method results in residue-specific joint φ,ψ-distribution functions, which are in agreement with the experimental J-couplings and minimally committal to the information in the coil-library. The 22-residue human peptide hormone motilin, uniformly ¹⁵N-labeled, was studied. The ³J(Hα,N(i+1)) couplings were measured from the E.COSY pattern in the sequential NOESY cross-peaks. By employing homodecoupling and an in-phase/anti-phase filter, sharp Hα resonances (about 5 Hz) were obtained, enabling accurate determination of the coupling with minimal spectral overlap. Clear trends in the resulting φ,ψ-distribution functions along the sequence are observed, with a nascent helical structure in the central part of the peptide and more extended conformations of the receptor-binding N-terminus as the most prominent characteristics. From the φ,ψ-distribution functions, the contribution from each residue to the thermodynamic entropy, i.e., the segmental entropies, are calculated and compared to segmental entropies estimated from ¹⁵N-relaxation data. Remarkable agreement between the relaxation- and J-coupling-based methods is found. Residues belonging to the nascent helix and the C-terminus show segmental entropies of approximately -20 J K⁻¹ mol⁻¹ and -12 J K⁻¹ mol⁻¹, respectively, in both series. The agreement
Estimating Marginal Returns to Education
Carneiro, Pedro; Heckman, James J.; Vytlacil, Edward
2011-01-01
This paper estimates marginal returns to college for individuals induced to enroll in college by different marginal policy changes. The recent instrumental variables literature seeks to estimate this parameter, but in general it does so only under strong assumptions that are tested and found wanting. We show how to utilize economic theory and local instrumental variables estimators to estimate the effect of marginal policy changes. Our empirical analysis shows that returns are higher for individuals with values of unobservables that make them more likely to attend college. We contrast our estimates with IV estimates of the return to schooling. PMID:25110355
Maximum Principle for Nonlinear Cooperative Elliptic Systems on ℝ^N
LEADI Liamidi; MARCOS Aboubacar
2011-01-01
We investigate in this work necessary and sufficient conditions for having a maximum principle for a cooperative elliptic system on the whole space ℝ^N. Moreover, we prove the existence of solutions for the considered system by an approximation method.
Marginal grafts increase early mortality in liver transplantation
Telesforo Bacchella
CONTEXT AND OBJECTIVE: Expanded donor criteria (marginal grafts) are an important solution for organ shortage. Nevertheless, they raise an ethical dilemma because they may increase the risk of transplant failure. This study compares the outcomes from marginal and non-marginal graft transplantation in 103 cases of liver transplantation due to chronic hepatic failure. DESIGN AND SETTING: One hundred and three consecutive liver transplantations to treat chronic liver disease performed in the Liver Transplantation Service of Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo between January 2001 and March 2006 were retrospectively analyzed. METHODS: We estimated graft quality according to a validated scoring system. We assessed the pre-transplantation liver disease category using the Model for End-Stage Liver Disease (MELD), as low MELD (< 20) or high MELD (≥ 20). The parameters for marginal and non-marginal graft comparison were the one-week, one-month and one-year recipient survival rates, serum liver enzyme peak, post-transplantation hospital stay and incidence of surgical complications and retransplantation. The significance level was 0.05. RESULTS: There were no differences between the groups regarding post-transplantation hospital stay, serum liver enzyme levels and surgical complications. In contrast, marginal grafts decreased overall recipient survival one month after transplantation. Furthermore, low-MELD recipients of non-marginal grafts showed better one-week and one-month survival than did high-MELD recipients of marginal livers. After the first month, patient survival was comparable in all groups up to one year. CONCLUSION: The use of marginal grafts increases early mortality in liver transplantation, particularly among high-MELD recipients.
Intraoperative ultrasound control of surgical margins during partial nephrectomy.
Alharbi, Feras M; Chahwan, Charles K; Le Gal, Sophie G; Guleryuz, Kerem M; Tillou, Xavier P; Doerfler, Arnaud P
2016-01-01
To evaluate a simple and fast technique to ensure negative surgical margins on partial nephrectomies, while correlating margin statuses with the final pathology report. This study was conducted on patients undergoing partial nephrectomy (PN) for T1-T2 renal tumors from January 2010 to the end of December 2015. Before tumor removal, intraoperative ultrasound (US) localization was performed. After tumor removal and before performing hemostasis of the kidney, the specimens were placed in a saline solution and a US was performed to evaluate whether the tumor's capsule was intact, and the findings were then compared to the final pathology results. In total, 177 PNs (147 open procedures and 30 laparoscopic procedures) were performed on 147 patients. Arterial clamping was done for 32 patients and the mean warm ischemia time was 19 ± 6 min. The mean US examination time was 41 ± 7 s. The US analysis of surgical margins was negative in 172 cases and positive in four, and in only one case was it not possible to conclude. The final pathology results revealed one false positive surgical margin and one false negative surgical margin, while all other margins were in concert with US results. The mean tumor size was 3.53 ± 1.43 cm, and the mean surgical margin was 2.8 ± 1.5 mm. The intraoperative US control of resection margins in PN is a simple, efficient, and effective method for ensuring negative surgical margins with a small increase in warm ischemia time, and it can be conducted by the operating urologist.
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations.
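The construction described in this abstract, a canonical distribution whose sensitivity factor β is fixed by the expectation value of the error function, with marginals obtained by integrating out the other parameters, can be sketched numerically on a toy grid. The error function, parameter grids, and target value below are invented for illustration and are not from the paper's New Jersey shelf data.

```python
import numpy as np

def canonical_pdf(E, beta):
    """Maximum-entropy (canonical) distribution p(m) ∝ exp(-beta E(m))
    on a discrete parameter grid; E.min() is subtracted for numerical
    stability (it cancels in the normalization)."""
    w = np.exp(-beta * (E - E.min()))
    return w / w.sum()

def solve_beta(E, E_bar, lo=1e-6, hi=1e3, iters=60):
    """Find beta so that <E> equals the target E_bar by bisection;
    the expected error is monotonically decreasing in beta."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (canonical_pdf(E, mid) * E).sum() > E_bar:
            lo = mid  # expected error still too large: increase beta
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical misfit surface over a (sound-speed ratio, source level) grid.
c = np.linspace(0.98, 1.10, 61)      # sound-speed ratio at the seabed
s = np.linspace(120.0, 140.0, 41)    # source level (dB, illustrative)
C, S = np.meshgrid(c, s, indexing="ij")
E = (C - 1.05)**2 / 0.001 + (S - 131.0)**2 / 10.0   # toy error function

beta = solve_beta(E, E_bar=E.min() + 1.0)           # constrain <E>
p = canonical_pdf(E, beta)
p_c = p.sum(axis=1)   # marginal over the sound-speed ratio
p_s = p.sum(axis=0)   # marginal over the source level
```

The two `sum` calls at the end are the discrete analogue of the integration over the other parameters that the abstract uses to obtain marginal distributions for individual parameters.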
Parker, S. D.
2016-12-01
The kinematic evolution of the eastern Snake River Plain (ESRP) remains highly contested. A lack of strike-slip faults bounding the ESRP serves as a primary assumption in many leading kinematic models. Recent GPS geodesy has highlighted possible shear zones along the ESRP, yet regional strike-slip faults remain unidentified. Oblique movement within dense arrays of high-angle conjugate normal faults, paralleling the ESRP, occurs within a discrete zone of 50 km on both margins of the ESRP. These features have long been attributed to progressive crustal flexure and subsidence within the ESRP, but are capable of accommodating the observed strain without necessitating large scale strike-slip faults. Deformation features within an extensive Neogene conglomerate provide field evidence for dextral shear in a transtensional system along the northern margin of the ESRP. Pressure-solution pits and cobble striations provide evidence for a horizontal ENE/WSW maximum principal stress orientation, consistent with the hypothesis of a dextral Centennial shear zone. Fold hinges, erosional surfaces and stratigraphic datums plunging perpendicular into the ESRP have been attributed to crustal flexure and subsidence of the ESRP. Similar Quaternary folds plunge obliquely into the ESRP along its margins where diminishing offset along active normal faults trends into linear volcanic features. In all cases, orientations and distributions of plunging fold structures display a correlation to the terminus of active Basin and Range faults and linear volcanic features of the ESRP. An alternative kinematic model, rooted in kinematic disparities between Basin and Range faults and paralleling volcanic features, may explain the observed downwarping as well as provide a mechanism for the observed shear along the margins of the ESRP. By integrating field observations with seismic, geodetic and geomorphic observations this study attempts to decipher the signatures of crustal flexure and shear along the
[Sinaloa: the geography of marginalization].
Aguayo Hernandez, J R
1993-01-01
Sinaloa's State Population Program for 1993-98 contains the objective of promoting integration of demographic criteria into the planning process. The action program calls for establishing indicators of economic and social inequality so that conditions of poverty and marginalization can be identified. To further these goals, the State Population Council used data from the National Population Council project on regional inequality and municipal marginalization in Mexico to analyze marginalization at the state level. Nine indicators of educational status, housing conditions, spatial distribution, and income provide information that allows the definition of municipios and regions that should receive priority in economic and social development programs. The index of municipal marginalization (IMM) is a statistical summary of the nine indicators, which are based on information in the 1990 census. As of March 1990, 9.9% of Sinaloa's population over age 15 was illiterate and 37.4% had incomplete primary education. 91.0% had electricity, but 18.7% lacked indoor toilet facilities and 19.4% had no piped water. 23.7% of houses had dirt floors. 60% of households were crowded, defined as having more than two persons per bedroom. 43.5% of the state population lived in localities with fewer than 5000 inhabitants, where service delivery is difficult and costly. 55.6% of the economically active population was judged to earn less than the amount needed to satisfy essential needs. All except one municipio bordering the Pacific ocean had low or very low indicators of marginalization, while all those in the sierra had a medium or high degree of marginalization. Sinaloa's statewide IMM was eighteenth among Mexico's 32 federal entities, with Chiapas showing the highest degree of marginalization and the Federal District the lowest.
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao
2014-01-01
This paper is devoted to the study and analysis of the maximum signal-to-noise ratio (SNR) filters for noise reduction, both in the time and short-time Fourier transform (STFT) domains, with a single microphone and with multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR, but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR ... This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value.
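As background on the filters this abstract analyzes: a maximum SNR filter is the principal generalized eigenvector of the desired-signal and noise covariance matrices, since the output SNR is a generalized Rayleigh quotient. A minimal numerical sketch follows, with toy covariance matrices that are invented for illustration and not taken from the paper.

```python
import numpy as np

def max_snr_filter(Rx, Rv):
    """Filter h maximizing the output SNR (h' Rx h) / (h' Rv h), where
    Rx is the desired-signal covariance and Rv the noise covariance.
    The maximizer is the principal eigenvector of Rv^{-1} Rx."""
    vals, vecs = np.linalg.eig(np.linalg.solve(Rv, Rx))
    h = np.real(vecs[:, np.argmax(np.real(vals))])
    return h / np.linalg.norm(h)

def output_snr(h, Rx, Rv):
    return float(h @ Rx @ h) / float(h @ Rv @ h)

# Toy covariances: the signal lives mostly in one direction, noise is white.
Rx = np.array([[4.0, 1.0],
               [1.0, 0.5]])
Rv = np.eye(2)
h = max_snr_filter(Rx, Rv)
```

With white noise the optimum reduces to the top eigenvector of `Rx`, and the achieved output SNR equals the largest eigenvalue; the SNR gain comes purely from spatial/temporal alignment with the signal subspace, which is also why, as the abstract notes, it says nothing by itself about speech distortion.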
D. Molenaar; S. van der Sluis; D.I. Boomsma; C.V. Dolan
2012-01-01
Considerable effort has been devoted to the analysis of genotype by environment (G × E) interactions in various phenotypic domains, such as cognitive abilities and personality. In many studies, environmental variables were observed (measured) variables. In case of an unmeasured environment, van der
AN INVERSE MAXIMUM CAPACITY PATH PROBLEM WITH LOWER BOUND CONSTRAINTS
杨超; 陈学旗
2002-01-01
The computational complexity of the inverse minimum capacity path problem with a lower bound on the capacity of the maximum capacity path is examined, and it is proved that this problem is NP-complete. A strongly polynomial algorithm for a local optimal solution is provided.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Profit margins in Japanese retailing
J.C.A. Potjes; A.R. Thurik (Roy)
1993-01-01
Using a rich data source, we explain differences and developments in profit margins of medium-sized stores in Japan. We conclude that the protected environment enables the retailer to pass on all operating costs to the customers and to obtain a relatively high basic income. High service
Respiration in ocean margin sediments
Andersson, J.H.
2007-01-01
The aim of this thesis was the study of respiration in ocean margin sediments and the assessment of the tools needed for this purpose. The first study was on the biological pump and global respiration patterns in the deep ocean, using an empirical model based on sediment oxygen consumption data. In this thesis the depth dependence of respiration patterns was modelled using a compiled data set of sediment oxygen consumption rates. We showed that the depth relationship can best be described by a do...
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + √2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ_0, μ_1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ_0, ℓ_1)}, with μ_0 < 0 < μ_1. The optimal control pulls as hard as possible (v_t = μ_0) while X_t < g_*(S_t) and pushes as hard as possible (v_t = μ_1) while X_t > g_*(S_t), where s ↦ g_*(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, experimentally, computationally, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics; yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Confinement margins for ignition and driven operation in ITER EDA ID
Johner, J.
1995-09-01
Preliminary calculations for ITER EDA ID have been performed using the 1/2D thermal equilibrium code HELIOS. It is found that: - The maximum ignition margin for ITER ID (29%) is 6% less than for ITER OD (35%) and 5% less than for ITER CDA (34%). - Decreasing the ratio τ*_He/τ_E from the nominal value 10 to a value of 5 gives a 12% gain in the maximum ignition margin. Increasing the ratio from 10 to 15 causes a 22% loss in the margin. Furthermore, ignited equilibria no longer exist for τ*_He/τ_E ≥ 17.6. - Operation in driven mode with 50 MW of external power increases the confinement capability by 13%. With 100 MW, the improvement is 24%. - Lowering the fusion power from 1500 to 1000 MW slightly improves the maximum ignition margin (+5%) and allows operation below the Greenwald density limit. - A 10% reduction of the toroidal magnetic field, with a correlative diminution of the plasma current for constant-safety-factor operation, causes a dramatic reduction (-18%) of the maximum ignition margin. - A fraction of neon of 0.68% would completely suppress the ignition margin. Furthermore, ignited equilibria, with the nominal fusion power and τ*_He/τ_E, no longer exist when the neon fraction exceeds 0.75%. (Author). 2 refs., 10 figs.
The Margins of Medieval Manuscripts
Nataša Kavčič
2011-12-01
Shortly after the mid-thirteenth century, various images began to fill the margins in both religious and secular texts. Many factors influenced the emergence of this type of manuscript decoration, but it has generally been attributed to the revived interest in nature and the Gothic inclination for humorous and anecdotal detail. After highlighting other possible reasons for the occurrence of marginal illumination, this paper introduces two manuscripts from the Archiepiscopal Archives in Ljubljana. The manuscripts show numerous facial drawings affixed to some of the letters. This article addresses how to interpret such drawings and stresses that they do not necessarily function as symbolic images or images with any specific didactic value. Quite the opposite: these drawings seem not to have any meaning and are often merely indications of an illuminator's sense of humor. Because of their exaggerated facial expressions, these drawings could be perceived as true predecessors of modern caricature.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for 3-regular graphs, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future, we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
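The extreme value approach mentioned in the abstract can be illustrated with a toy calculation: under an untruncated Gutenberg-Richter law there is no finite maximum magnitude, and the distribution of the largest magnitude observed in a future window follows directly from the Poisson rate of events. A minimal sketch (all parameter names and default values are illustrative, not taken from the study):

```python
import numpy as np

def p_max_below(m, rate_per_year, years, b=1.0, m0=4.0):
    """P(no event in `years` exceeds magnitude m), assuming magnitudes above
    the completeness threshold m0 follow a Gutenberg-Richter (exponential)
    law with b-value `b`, and events above m0 arrive as a Poisson process
    with `rate_per_year` events per year.  Illustrative only."""
    beta = b * np.log(10.0)
    # Probability that a single event exceeds m
    tail = np.exp(-beta * (np.asarray(m, dtype=float) - m0))
    # Poisson thinning: no exceedance of m during the window
    return np.exp(-rate_per_year * years * tail)
```

A truncated law with a finite M would push this curve to exactly 1 at m = M; testing an M estimate amounts to asking whether observed maxima are compatible with that truncation, which is precisely what becomes insensitive for rare, large events.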
Margin Requirements and Equity Option Returns
Hitzemann, Steffen; Hofmann, Michael; Uhrig-Homburg, Marliese
In equity option markets, traders face margin requirements both for the options themselves and for hedging-related positions in the underlying stock market. We show that these requirements carry a significant margin premium in the cross-section of equity option returns. The sign of the margin pre...
Marginality and Problem Solving Effectiveness in Broadcast Search
Jeppesen, Lars Bo; Lakhani, Karim R
2009-01-01
We examine who the winners are in science problem solving contests characterized by open broadcast of problem information, self-selection of external solvers to discrete problems from the laboratories of large R&D intensive companies and blind review of solution submissions. Analyzing a unique dataset of 166 science challenges involving over 12,000 scientists revealed that technical and social marginality, being a source of different perspectives and heuristics, plays an important role in...
THE EFFECTS OF CHANGING MARGIN LEVELS ON FUTURES OPTIONS PRICE
Yanling GU; Juan LI
2006-01-01
The paper studies the effects of changing margin levels on the price of futures options and how to organize a market maker's position. The Black model (1976) becomes a special case of this paper. The paper prices futures options by replicating them, adopting the theory of Backward Stochastic Differential Equations (BSDEs for short). Furthermore, the price of a futures option is the unique solution to a nonlinear BSDE.
Biomass energy and marginal areas
Chassany, J.P.
1984-01-01
The aim of this study was to analyze the conditions and effects of a possible development of biomass energy upgrading in uneconomical or unprofitable areas. The physical, social and economical characteristics of these regions (in France) are described; then the different types of biomass are presented (agricultural wastes, energy crops, forest and land products and residues, food processing effluents, municipal wastes) as well as the various energy processes (production of alcohol, methane, thermochemical processes, vegetable oils). The development and the feasibility of these processes in marginal areas are finally analyzed, taking into account the accessibility of the biomass and the technical and commercial impacts.
Extracting volatility signal using maximum a posteriori estimation
Neto, David
2016-11-01
This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and, consequently, heavy-tailed marginal distributions for the log-returns. We consider two routes to choose the regularization, and we compare our MAP estimate to a realized volatility measure for three exchange rates.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
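The linear-time solution mentioned above is, in imperative form, the classic single-pass scan (a sketch of the standard algorithm; the paper itself develops the monadic, datatype-generic derivation):

```python
def max_segment_sum(xs):
    """Largest sum over contiguous segments of xs (the empty segment counts as 0)."""
    best = current = 0
    for x in xs:
        # Either extend the best segment ending here, or start afresh.
        current = max(0, current + x)
        best = max(best, current)
    return best
```

The cubic-time specification (try every segment) and this fold agree; the calculational content of the paper is exactly the derivation of the latter from the former.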
Marginal Deformations with U(1)^3 Global Symmetry
Ahn, C; Ahn, Changhyun; Vazquez-Poritz, Justin F.
2005-01-01
We generate new 11-dimensional supergravity solutions from deformations based on U(1)^3 symmetries. The initial geometries are of the form AdS_4 x Y_7, where Y_7 is a 7-dimensional Sasaki-Einstein space. We consider a general family of cohomogeneity one Sasaki-Einstein spaces, as well as the recently-constructed cohomogeneity three L^{p,q,r,s} spaces. For certain cases, such as when the Sasaki-Einstein space is S^7, Q^{1,1,1} or M^{1,1,1}, the deformed gravity solutions correspond to a marginal deformation of a known dual gauge theory.
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
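As an illustration of the quantity being maximized: for a connected graph, $Kf(G)$ equals $n$ times the sum of reciprocals of the nonzero Laplacian eigenvalues, which makes it easy to compute numerically. A minimal sketch (not from the paper):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index of a connected graph from its adjacency matrix:
    n * sum(1 / lambda) over the nonzero Laplacian eigenvalues, which
    equals the sum of resistance distances over all vertex pairs."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    eig = np.linalg.eigvalsh(L)
    nonzero = eig[eig > 1e-9]               # drop the single zero eigenvalue
    return A.shape[0] * float(np.sum(1.0 / nonzero))

# The 4-cycle, a one-cycle cactus: Kf(C_n) = (n^3 - n) / 12, so Kf(C_4) = 5
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
```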
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus … on second-order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Dynamics of the continental margins
1990-11-01
On 18--20 June 1990, over 70 oceanographers conducting research in the ocean margins of North America attended a workshop in Virginia Beach, Virginia. The purpose of the workshop was to provide the Department of Energy with recommendations for future research on the exchange of energy-related materials between the coastal and interior ocean and the relationship between the ocean margins and global change. The workshop was designed to optimize the interaction of scientists from specific research disciplines (biology, chemistry, physics and geology) as they developed hypotheses, research questions and topics and implementation plans. The participants were given few restraints on the research they proposed other than realistic time and monetary limits. The interdisciplinary structure of the meeting promoted lively discussion and creative research plans. The meeting was divided into four working groups based on lateral, vertical, air/sea and sediment/water processes. Working papers were prepared and distributed before the meeting. During the meeting the groups revised the papers and added recommendations that appear in this report, which was reviewed by an Executive Committee.
Mean-Variance Portfolio Selection with Margin Requirements
Yuan Zhou
2013-01-01
We study the continuous-time mean-variance portfolio selection problem in the situation when investors must pay margin for short selling. The problem is essentially a nonlinear stochastic optimal control problem, because the coefficients of the positive and negative parts of the control variables are different. We cannot apply the results of the stochastic linear-quadratic (LQ) problem, and the solution of the corresponding Hamilton-Jacobi-Bellman (HJB) equation is not smooth. Li et al. (2002) studied the case when short selling is prohibited; therefore they only need to consider the positive part of the control variables, whereas we need to handle both the positive part and the negative part. The main difficulty is that the positive part and the negative part are not independent, so the previous results are not directly applicable. By decomposing the problem into several subproblems, we find the solutions of the HJB equation in two disjoint regions and then prove it is the viscosity solution of the HJB equation. Finally, we formulate the solution of the optimal portfolio and the efficient frontier. We also present two examples showing how different margin rates affect the optimal solutions and the efficient frontier.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices is not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
DRAGAN CVETKOVIC
2007-03-01
The aim of this work was to study the anticipated antioxidant role of four selected carotenoids in mixtures with lecithin lipoidal compounds in hexane solution, under continuous UV-irradiation in three different ranges (UV-A, UV-B and UV-C). Two carotenes (β-carotene and lycopene) and two xanthophylls (lutein and neoxanthin) were employed to control the lipid peroxidation process generated by UV-irradiation, by scavenging the free radicals involved. The results show that while the carotenoids undergo a substantial, structure-dependent destruction (bleaching), which is highly dependent on the energy of the UV photons, their contribution to the expected suppression of lecithin peroxidation is of marginal importance, not exceeding a maximum of 20%. The marginal antioxidant behaviour has been attributed to a highly unordered hexane solution, where the scavenging action of the carotenoids becomes less competitive.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
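The regularizer described above hinges on the mutual information between (discrete) classification responses and labels. A minimal plug-in estimate from counts might look like the following (an illustrative sketch using I(Ŷ;Y) = H(Ŷ) + H(Y) − H(Ŷ,Y); the paper's own entropy-estimation scheme for continuous responses is not reproduced here):

```python
import numpy as np

def empirical_mutual_information(responses, labels):
    """Plug-in estimate of I(responses; labels) in nats, from joint counts."""
    def entropy(counts):
        p = counts / counts.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    _, ri = np.unique(np.asarray(responses), return_inverse=True)
    _, yi = np.unique(np.asarray(labels), return_inverse=True)
    joint = np.zeros((ri.max() + 1, yi.max() + 1))
    for a, b in zip(ri, yi):
        joint[a, b] += 1          # contingency table of (response, label)
    return entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0)) - entropy(joint.ravel())
```

A perfect classifier on balanced binary labels attains I = ln 2, the label entropy, which is the value the regularizer pushes toward.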
Brune, Sascha; Heine, Christian; Pérez-Gussinyé, Marta; Sobolev, Stephan
2014-05-01
The architecture of magma-poor continental margins is remarkably variable. The width of highly thinned continental crust (with a thickness […] material; (2) Formation of a low-viscosity exhumation channel adjacent to the rift centre that is generated by heat transfer from the upwelling mantle and enhanced by viscous strain softening. Rift migration takes place in a steady-state manner and is accomplished by oceanward-younging sequential faults within the upper crust and balanced through lower crustal flow. We demonstrate that the rate of extension has paramount control on margin width. Since higher velocities lead to elevated heat flow within the rift and hence to hot and weak lower crust, a larger low-viscosity exhumation channel is generated that facilitates rift migration, leading to wider margins. The South Atlantic is an ideal test bed for the hypothesis of velocity-dependent margin width, since rifting was fast in the south but slow in the northern part. As predicted by our numerical models, the maximum present-day margin width increases almost linearly from the conjugate Equatorial margin segments to the Florianopolis/Walvis ridge. Even though the polarity of the magma-poor South Atlantic margins alternates, the asymmetry and the width of the wider margin are in very good agreement with our simulations. The described rift evolution has three fundamental implications: (1) It implies sustained transfer of material across the extensional plate boundary, thereby predicting that large portions of a wide margin originate from its conjugate side. (2) Migration of the deformation locus causes faulting in the distal parts of the margin to postdate that of the proximal parts by as much as 10 million years. This means that the syn-rift and post-rift phases are location-dependent. (3) Lateral movement of the rift centre generates drastically different peak heat flow and subsidence histories at the proximal and the distal margin.
Margination and demargination in confined multicomponent suspensions: a parametric study
Graham, Michael; Sinha, Kushal; Henriquez Rivera, Rafael
2014-11-01
Blood and other multicomponent suspensions display a segregation behavior in which different components are differentially distributed in the cross-stream direction during flow in a confined geometry such as an arteriole or a microfluidic device. In blood, the platelets and leukocytes are strongly segregated to the near-wall region and are said to be "marginated." The effects of particle size, shape and rigidity on segregation behavior in confined simple shear flow of binary suspensions are computationally investigated here. The results show that in a mixture of particles with the same shape and different membrane rigidity, the stiffer particles marginate while the flexible particles demarginate, moving toward the center of the channel. In a mixture of particles with the same membrane rigidity and different shape, particles with smaller aspect ratio marginate while those with higher aspect ratio demarginate. These results are consistent with theoretical arguments based on wall-induced migration and pair collision dynamics. An analytical solution is presented for a model problem that reveals qualitatively different behavior in various parameter regimes. Finally, effects of viscoelasticity of the suspending phase on margination are examined. This work was supported by the NSF under Grants CBET-1132579 and CBET-1436082.
Identification Of Marginal Land Suitable For Biofuel Production In Serbia
Radojević Uroš
2015-11-01
The use of biomass as a potential energy source has both advantages and disadvantages. Biomass is a potential source of fuel energy that provides economic and environmental benefits such as less expensive and less energy-intensive production, carbon sequestration and soil preservation. However, the main concern associated with biofuels is that land needed for food will be used for biofuel crops. One potential solution is the use of marginal lands, which are not suited for food production. Marginal lands generally refer to areas not only with low productivity, but also with limitations that make them unsuitable for agricultural practices and ecosystem functions. This can be due to various forms of land degradation such as pollution, surface exploitation of mineral resources, erosion, overexploitation and others. We used remotely sensed data, environmental data and field survey data to identify possible marginal lands in Serbia. All gathered data was transferred to GIS in order to create maps and a database of potential marginal lands which could be used for biomass production.
Adaptive Statistical Language Modeling; A Maximum Entropy Approach
1994-04-19
…recognition systems were built that could recognize vowels or digits, but they could not be successfully extended to handle more realistic language… maximum likelihood of generating the training data. The identity of the ML and ME solutions, apart from being aesthetically pleasing, is extremely
Multitime maximum principle approach of minimal submanifolds and harmonic maps
Udriste, Constantin
2011-01-01
Some optimization problems coming from differential geometry, such as the minimal submanifolds problem and the harmonic maps problem, are solved here via interior solutions of appropriate multitime optimal control problems. Section 1 underlines some science domains in which multitime optimal control problems appear. Section 2 (Section 3) recalls the multitime maximum principle for optimal control problems with multiple (curvilinear) integral cost functionals and $m$-flow type constraint evolution. Section 4 shows that there exists a multitime maximum principle approach to multitime variational calculus. Section 5 (Section 6) proves that the minimal submanifolds (harmonic maps) are optimal solutions of multitime evolution PDEs in an appropriate multitime optimal control problem. Section 7 uses the multitime maximum principle to show that of all solids having a given surface area, the sphere is the one having the greatest volume. Section 8 studies the minimal area of a multitime linear flow as optimal c...
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
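The decoding scheme the abstract describes, reading out a finite-temperature Boltzmann distribution rather than just the ground state, can be illustrated by brute force on a tiny Ising model (a toy sketch only; the experiment itself uses a Josephson-junction annealer to sample this distribution physically):

```python
import itertools
import numpy as np

def boltzmann_bit_marginals(J, h, beta):
    """Exact marginals P(s_i = +1) under the Boltzmann distribution of an
    Ising model with energy E(s) = -1/2 s^T J s - h^T s, by enumerating all
    2^n spin states (feasible only for small n).  Maximum entropy decoding
    thresholds these marginals at 1/2 instead of taking the ground state."""
    h = np.asarray(h, dtype=float)
    J = np.asarray(J, dtype=float)
    n = len(h)
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    E = -0.5 * np.einsum('si,ij,sj->s', states, J, states) - states @ h
    w = np.exp(-beta * (E - E.min()))     # shift energies for numerical stability
    p = w / w.sum()
    return p @ (states > 0).astype(float)  # P(s_i = +1) for each spin
```

As beta grows, the marginals concentrate on the ground state, recovering maximum likelihood decoding as the zero-temperature limit.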
Understanding Peripheral Bat Populations Using Maximum-Entropy Suitability Modeling
Barnhart, Paul R.; Gillam, Erin H.
2016-01-01
Individuals along the periphery of a species distribution regularly encounter more challenging environmental and climatic conditions than conspecifics near the center of the distribution. Due to these potential constraints, individuals in peripheral margins are expected to change their habitat and behavioral characteristics. Managers typically rely on species distribution maps when developing adequate management practices. However, these range maps are often too simplistic and do not provide adequate information as to what fine-scale biotic and abiotic factors are driving a species occurrence. In the last decade, habitat suitability modelling has become widely used as a substitute for simplistic distribution mapping which allows regional managers the ability to fine-tune management resources. The objectives of this study were to use maximum-entropy modeling to produce habitat suitability models for seven species that have a peripheral margin intersecting the state of North Dakota, according to current IUCN distributions, and determine the vegetative and climatic characteristics driving these models. Mistnetting resulted in the documentation of five species outside the IUCN distribution in North Dakota, indicating that current range maps for North Dakota, and potentially the northern Great Plains, are in need of update. Maximum-entropy modeling showed that temperature and not precipitation were the variables most important for model production. This fine-scale result highlights the importance of habitat suitability modelling as this information cannot be extracted from distribution maps. Our results provide baseline information needed for future research about how and why individuals residing in the peripheral margins of a species’ distribution may show marked differences in habitat use as a result of urban expansion, habitat loss, and climate change compared to more centralized populations. PMID:27935936
Performance Evaluation of Portfolios with Margin Requirements
Hui Ding; Zhongbao Zhou; Helu Xiao; Chaoqun Ma; Wenbin Liu
2014-01-01
In financial markets, short sellers will be required to post margin to cover possible losses in case the prices of the risky assets go up. Only a few studies focus on the optimization and performance evaluation of portfolios in the presence of margin requirements. In this paper, we investigate the theoretical foundation of DEA (data envelopment analysis) approach to evaluate the performance of portfolios with margin requirements from a different perspective. Under the mean-variance framework,...
Maximum entropy method for solving operator equations of the first kind
金其年; 侯宗义
1997-01-01
The maximum entropy method for linear ill-posed problems with modeling error and noisy data is considered and the stability and convergence results are obtained. When the maximum entropy solution satisfies the "source condition", suitable rates of convergence can be derived. Considering the practical applications, an a posteriori choice for the regularization parameter is presented. As a byproduct, a characterization of the maximum entropy regularized solution is given.
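As a minimal sketch of the idea (not the paper's continuous operator setting), maximum-entropy regularization of a discretized first-kind equation Ax = b penalizes the negative entropy of the positive solution instead of a quadratic term. The kernel A, data b, and regularization parameter alpha below are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Discretized first-kind equation with a smoothing kernel (made up for
# this sketch); x_true is a smooth positive profile to be recovered.
n = 40
t = np.linspace(0.0, 1.0, n)
A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.exp(-80.0 * (t - 0.5) ** 2) + 0.1
b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)
alpha = 1e-6   # regularization parameter, chosen ad hoc here

def objective(x):
    # residual norm plus the (negative) entropy stabilizer sum x*log(x)
    return np.sum((A @ x - b) ** 2) + alpha * np.sum(x * np.log(x))

res = minimize(objective, np.full(n, 0.5),
               bounds=[(1e-9, None)] * n, method="L-BFGS-B")
x_me = res.x   # maximum-entropy regularized solution
```

An a posteriori parameter choice, as discussed in the abstract, would re-solve this problem while adjusting alpha against the noise level.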
Faster Algorithms for Privately Releasing Marginals
Thaler, Justin R; Ullman, Jonathan Robert; Vadhan, Salil P.
2012-01-01
We study the problem of releasing $k$-way marginals of a database $D \\in (\\{0,1\\}^d)^n$, while preserving differential privacy. The answer to a $k$-way marginal query is the fraction of $D$'s records $x \\in \\{0,1\\}^d$ with a given value in each of a given set of up to $k$ columns. Marginal queries enable a rich class of statistical analyses of a dataset, and designing efficient algorithms for privately releasing marginal queries has been identified as an important open problem in private data...
Assessment of seismic margin calculation methods
Kennedy, R.P.; Murray, R.C.; Ravindra, M.K.; Reed, J.W.; Stevenson, J.D.
1989-03-01
Seismic margin review of nuclear power plants requires that the High Confidence of Low Probability of Failure (HCLPF) capacity be calculated for certain components. The candidate methods for calculating the HCLPF capacity as recommended by the Expert Panel on Quantification of Seismic Margins are the Conservative Deterministic Failure Margin (CDFM) method and the Fragility Analysis (FA) method. The present study evaluated these two methods using some representative components in order to provide further guidance in conducting seismic margin reviews. It is concluded that either of the two methods could be used for calculating HCLPF capacities. 21 refs., 9 figs., 6 tabs.
[Resection margins in conservative breast cancer surgery].
Medina Fernández, Francisco Javier; Ayllón Terán, María Dolores; Lombardo Galera, María Sagrario; Rioja Torres, Pilar; Bascuñana Estudillo, Guillermo; Rufián Peña, Sebastián
2013-01-01
Conservative breast cancer surgery is facing a new problem: the potential tumour involvement of resection margins. This eventuality has been closely and negatively associated with disease-free survival. Various factors may influence the likelihood of margins being affected, mostly related to the characteristics of the tumour, patient or surgical technique. In the last decade, many studies have attempted to find predictive factors for margin involvement. However, it is currently the new techniques used in the study of margins and tumour localisation that are significantly reducing reoperations in conservative breast cancer surgery.
On the evaluation of marginal expected shortfall
Caporin, Massimiliano; Santucci de Magistris, Paolo
2012-01-01
In the analysis of systemic risk, Marginal Expected Shortfall may be considered to evaluate the marginal impact of a single stock on the market Expected Shortfall. These quantities are generally computed using log-returns, in particular when there is also a focus on the returns' conditional distribution. ... In this case, the market log-return is only approximately equal to the weighted sum of equities' log-returns. We show that the approximation error is large during turbulent market phases, with a subsequent impact on Marginal Expected Shortfall. We then suggest how to improve the evaluation of Marginal Expected...
Optimizing Surgical Margins in Breast Conservation
Preya Ananthakrishnan
2012-01-01
Adequate surgical margins in breast-conserving surgery for breast cancer have traditionally been viewed as a predictor of local recurrence rates. There is still no consensus on what constitutes an adequate surgical margin; however, it is clear that there is a trade-off between widely clear margins and acceptable cosmesis. Preoperative approaches to plan the extent of resection with appropriate margins (in the setting of surgery first as well as after neoadjuvant chemotherapy) include mammography, US, and MRI. Improvements have been made in preoperative lesion localization strategies for surgery, as well as intraoperative specimen assessment, in order to ensure complete removal of imaging findings and facilitate margin clearance. Intraoperative strategies to accurately assess tumor and cavity margins include cavity shave techniques, as well as novel technologies for margin probes. Ablative techniques, including radiofrequency ablation as well as intraoperative radiation, may be used to extend tumor-free margins without resecting additional tissue. Oncoplastic techniques allow for wider resections while maintaining cosmesis and have acceptable local recurrence rates; however, they often involve surgery on the contralateral breast. As systemic therapy for breast cancer continues to improve, it is unclear what the importance of surgical margins on local control rates will be in the future.
On the Marginal Stability of Glassy Systems
Yan, Le; Baity-Jesi, Marco; Müller, Markus; Wyart, Matthieu
2015-03-01
In various glassy systems that are out of equilibrium, like spin glasses and granular packings, the dynamics appears to be critical: avalanches involving almost the whole system can happen. A recent conceptual breakthrough argues that such glassy systems sample the ensemble of marginally stable states, which inevitably results in critical dynamics. However, it is unclear how marginal stability is dynamically guaranteed. We investigate this marginal stability assumption by studying specifically the critical athermal dynamics of the Sherrington-Kirkpatrick model. We discuss how a pseudo-gap in the density distribution of local fields, characterizing the marginal stability, arises dynamically.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
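As a quick sanity check on the quoted numbers (assuming the standard solar mass of about 1.989 × 10^30 kg, which the abstract does not state):

```python
# Convert the quoted maximum iron core mass to solar masses.
core_mass_kg = 2.69e30
solar_mass_kg = 1.989e30          # assumed standard value
ratio = core_mass_kg / solar_mass_kg
print(round(ratio, 2))            # ≈ 1.35, matching the abstract
```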
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem till date runs in $O(m \sqrt{n})$ time due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
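A toy version of such a chain is Glauber dynamics on the monomer-dimer model: repeatedly pick a random edge and add it to or remove it from the current matching with probabilities governed by the fugacity λ. This sketch only illustrates the sampling idea; the graph, fugacity, and step count are invented, and the paper's actual $O(m \log^2 n)$ algorithm is considerably more refined:

```python
import random

def glauber_matching(edges, n, fugacity=4.0, steps=200_000, seed=1):
    """Glauber dynamics over matchings of a graph on n vertices,
    biased toward large matchings by `fugacity`; returns the largest
    matching seen.  Illustrative sketch only."""
    rng = random.Random(seed)
    matched = [None] * n              # matched[v] = partner of v, or None
    in_matching = set()
    best = set()
    p_add = fugacity / (1.0 + fugacity)
    for _ in range(steps):
        u, v = edges[rng.randrange(len(edges))]
        if (u, v) in in_matching:
            if rng.random() > p_add:  # remove with prob 1/(1+λ)
                in_matching.discard((u, v))
                matched[u] = matched[v] = None
        elif matched[u] is None and matched[v] is None:
            if rng.random() < p_add:  # add with prob λ/(1+λ)
                in_matching.add((u, v))
                matched[u], matched[v] = v, u
        if len(in_matching) > len(best):
            best = set(in_matching)
    return best

# On a path with four vertices the chain quickly visits the unique
# maximum matching {(0, 1), (2, 3)} of size 2.
best = glauber_matching([(0, 1), (1, 2), (2, 3)], n=4)
```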
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
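The background-only versus background-plus-source comparison amounts to a likelihood-ratio test on Poisson counts. A stripped-down sketch follows; the counts and rates are invented, and the real MLE tool fits full spatial models convolved with the Chandra PSF in Sherpa:

```python
import numpy as np

# Observed photon counts in a candidate source region (made up).
counts = np.array([3, 5, 4, 7, 6])
bkg_rate = 4.0          # fitted background level (assumed)
src_rate = 1.2          # candidate source contribution (assumed)

def poisson_loglike(counts, mu):
    # log L = sum(k*log(mu) - mu - log(k!)); the constant log(k!)
    # term cancels in the ratio and is dropped
    return np.sum(counts * np.log(mu) - mu)

ll_bkg = poisson_loglike(counts, bkg_rate)
ll_src = poisson_loglike(counts, bkg_rate + src_rate)
lr = 2.0 * (ll_src - ll_bkg)   # likelihood-ratio statistic
```

A positive `lr` indicates the background-plus-source hypothesis fits the counts better than background alone.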
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Caffarelli, Luis; Nirenberg, Louis
2011-01-01
The paper concerns singular solutions of nonlinear elliptic equations, including removable singularities for viscosity solutions, a strengthening of the Hopf lemma (extended to parabolic equations), and the strong maximum principle and Hopf lemma for viscosity solutions, also covering parabolic equations.
Values and marginal preferences in international business
Maseland, Robbert; van Hoorn, Andre
2010-01-01
In a recent paper in this journal, Maseland and van Hoorn argued that values surveys tend to conflate values and marginal preferences. This assertion has been challenged by Brewer and Venaik, who claim that the wording of most survey items does not suggest that these elicit marginal preferences. Thi
Marginal Utility and Convex Indifference Curves.
Jackson, A.A.
1981-01-01
Reviews discussion of the relationship between marginal utility and indifference curves which has been presented in recent issues of "Economics." Concludes that indifference analysis does not embody the assumptions of marginal utility theory and that there is no simple relationship between these concepts that does not entail unacceptable…
The homogeneous marginal utility of income assumption
Demuynck, T.
2015-01-01
We develop a test to verify if every agent from a population of heterogeneous consumers has the same marginal utility of income function. This homogeneous marginal utility of income assumption is often (implicitly) used in applied demand studies because it has nice aggregation properties and facilit
Tumor margin detection using optical biopsy techniques
Zhou, Yan; Liu, Cheng-hui; Li, Jiyou; Li, Zhongwu; Zhou, Lixin; Chen, Ke; Pu, Yang; He, Yong; Zhu, Ke; Li, Qingbo; Alfano, Robert R.
2014-03-01
The aim of this study is to use Resonance Raman (RR) and fluorescence spectroscopic techniques for tumor margin detection with high accuracy based on native molecular fingerprints of breast and gastrointestinal (GI) tissues. This tumor margin detection method exploits the advantages of the RR spectroscopic technique in situ and in real time to diagnose tumor changes, providing a powerful tool for clinically guiding intraoperative margin assessment and postoperative treatment. The tumor margin detection procedures by RR spectroscopy were performed by scanning the lesion from the center or around the tumor region ex vivo to find the changes in cancerous tissues within the rim of normal tissues using the native molecular fingerprints. The specimens used to analyze tumor margins included breast and GI carcinoma and normal tissues. The sharp margin of the tumor was found from the changes of RR spectral peaks within a 2 mm distance. The result was verified using fluorescence spectra with 300 nm, 320 nm and 340 nm excitation in a typical specimen of gastric cancerous tissue with a positive margin, in comparison with normal gastric tissue. This study demonstrates the potential of RR and fluorescence spectroscopy as new, label-free approaches to determining the intraoperative margin assessment.
Impact of abutment rotation and angulation on marginal fit: theoretical considerations.
Semper, Wiebke; Kraft, Silvan; Mehrhof, Jurgen; Nelson, Katja
2010-01-01
Rotational freedom of various implant positional index designs has been previously calculated. To investigate its clinical relevance, a three-dimensional simulation was performed to demonstrate the influence of rotational displacements of the abutment on the marginal fit of prosthetic superstructures. Idealized abutments with different angulations (0, 5, 10, 15, and 20 degrees) were virtually constructed (SolidWorks Office Premium 2007). Then, rotational displacement was simulated with various degrees of rotational freedom (0.7, 0.95, 1.5, 1.65, and 1.85 degrees). The resulting horizontal displacement of the abutment from the original position was quantified in microns, followed by a simulated pressure-less positioning of superstructures with defined internal gaps (5 µm, 60 µm, and 100 µm). The resulting marginal gap between the abutment and the superstructure was measured vertically with the SolidWorks measurement tool. Rotation resulted in a displacement of the abutment of up to 157 µm at maximum rotation and angulation. Interference of a superstructure with a defined internal gap of 5 µm placed on the abutment resulted in marginal gaps up to 2.33 mm at maximum rotation and angulation; with a 60-µm internal gap, the marginal gaps reached a maximum of 802 µm. Simulation using a superstructure with an internal gap of 100 µm revealed a marginal gap of 162 µm at abutment angulation of 20 degrees and rotation of 1.85 degrees. The marginal gaps increased with the degree of abutment angulation and the extent of rotational freedom. Rotational displacement of the abutment influenced prosthesis misfit. The marginal gaps between the abutment and the superstructure increased with the rotational freedom of the index and the angulation of the abutment.
Shock margin testing of a one-axis MEMS accelerometer.
Parson, Ted Blair; Tanner, Danelle Mary; Buchheit, Thomas Edward
2008-07-01
Shock testing was performed on a selected commercial-off-the-shelf MicroElectroMechanical System (COTS-MEMS) accelerometer to determine the margin between the published absolute maximum rating for shock and the 'measured' level where failures are observed. The purpose of this testing is to provide baseline data for isolating failure mechanisms under shock and environmental loading in a representative device used or under consideration for use within systems and assemblies of the DOD/DOE weapons complex. The specific device chosen for this study was the AD22280 model of the ADXL78 MEMS accelerometer manufactured by Analog Devices Inc. This study focuses only on the shock loading response of the device and provides the necessary data for adding the influence of environmental exposure to the reliability of this class of devices. The published absolute maximum rating for acceleration in any axis was 4000 G for this device, powered or unpowered. Results from this study showed first failures at 8000 G, indicating a margin factor of two. Higher shock level testing indicated that an in-plane, but off-axis, acceleration was more damaging than one in the sense direction.
Boundary condition effects on maximum groundwater withdrawal in coastal aquifers.
Lu, Chunhui; Chen, Yiming; Luo, Jian
2012-01-01
Prevention of sea water intrusion in coastal aquifers subject to groundwater withdrawal requires optimization of well pumping rates to maximize the water supply while avoiding sea water intrusion. Boundary conditions and the aquifer domain size have significant influences on simulating flow and concentration fields and estimating maximum pumping rates. In this study, an analytical solution is derived based on potential-flow theory for evaluating maximum groundwater pumping rates in a domain with a constant-hydraulic-head landward boundary. An empirical correction factor, which was introduced by Pool and Carrera (2011) to account for mixing in the case with a constant-recharge-rate boundary condition, is found to be also applicable for the case with a constant-hydraulic-head boundary condition, and therefore greatly improves the usefulness of the sharp-interface analytical solution. Compared with the solution for a constant-recharge-rate boundary, we find that a constant-hydraulic-head boundary often yields larger estimates of the maximum pumping rate, and when the domain size is five times greater than the distance between the well and the coastline, the effect of setting different landward boundary conditions becomes insignificant, with a relative difference between the two solutions of less than 2.5%. These findings can serve as preliminary guidance for conducting numerical simulations and designing tank-scale laboratory experiments for studying groundwater withdrawal problems in coastal aquifers with minimized boundary condition effects.
A Maximum Entropy Method for a Robust Portfolio Problem
Yingying Xu
2014-06-01
We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all of the asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
Generalized Relativistic Wave Equations with Intrinsic Maximum Momentum
Ching, Chee Leong
2013-01-01
We examine the nonperturbative effect of maximum momentum on the relativistic wave equations. In momentum representation, we obtain the exact eigen-energies and wavefunctions of the one-dimensional Klein-Gordon and Dirac equations with linear confining potentials, and the Dirac oscillator. Bound state solutions are only possible when the strength of the scalar potential is stronger than that of the vector potential. The energy spectrum of the systems studied is bounded from above, whereby classical characteristics are observed in the uncertainties of position and momentum operators. Also, there is a truncation in the maximum number of bound states that is allowed. Some of these quantum-gravitational features may have future applications.
Generalized relativistic wave equations with intrinsic maximum momentum
Ching, Chee Leong; Ng, Wei Khim
2014-05-01
We examine the nonperturbative effect of maximum momentum on the relativistic wave equations. In momentum representation, we obtain the exact eigen-energies and wave functions of one-dimensional Klein-Gordon and Dirac equation with linear confining potentials, and the Dirac oscillator. Bound state solutions are only possible when the strength of scalar potential is stronger than vector potential. The energy spectrum of the systems studied is bounded from above, whereby classical characteristics are observed in the uncertainties of position and momentum operators. Also, there is a truncation in the maximum number of bound states that is allowed. Some of these quantum-gravitational features may have future applications.
Efficiency of autonomous soft nanomachines at maximum power.
Seifert, Udo
2011-01-14
We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
Study on in-situ stress measurement around coastal marginal land in Fujian
LI Hong; AN Qi-mei; XIE Fu-ren
2005-01-01
The in-situ hydraulic fracturing stress measurements have been carried out around the coastal marginal land in Fujian Province, and the characteristics of magnitude, direction and distribution of tectonic stress have been obtained. Based on the observed stress data, the characteristics and activities of fault zones are analyzed and studied according to the Coulomb friction criterion. (1) The maximum horizontal principal compressive stress is in the NW-WNW direction from the north to the south along the coastline verge, which is parallel to the strike of the NW-trending fault zone, consistent with the direction of principal compressive stress obtained from geological structure and across-fault deformation data, and different from that reflected by focal mechanism solutions by about 20°. (2) The horizontal principal stress increases with depth; the relation among the three stresses is SH > Sv > Sh or SH ≈ Sv > Sh, and the stress state is liable to normal-fault and strike-slip fault activities. (3) According to the Coulomb friction criterion, taking the friction strength μ as 0.6–1.0 for analysis, the stress state reaching or exceeding the threshold for normal-fault frictional sliding near the fault implies that the current tectonic activity in the measuring area is mainly normal faulting. (4) The force source of the current tectonic stress field comes mainly from the westward and northwestward horizontal extrusions from the Pacific and Philippine Plates, respectively, to the Eurasian Plate.
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/AC coefficient level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first one is given as an upper bound for the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Performance Evaluation of Portfolios with Margin Requirements
Hui Ding
2014-01-01
In financial markets, short sellers will be required to post margin to cover possible losses in case the prices of the risky assets go up. Only a few studies focus on the optimization and performance evaluation of portfolios in the presence of margin requirements. In this paper, we investigate the theoretical foundation of the DEA (data envelopment analysis) approach to evaluate the performance of portfolios with margin requirements from a different perspective. Under the mean-variance framework, we construct the optimization model and portfolio possibility set considering margin requirements. The convexity of the portfolio possibility set is proved, and the concept of efficiency in classical economics is extended to the portfolio case. The DEA models are then developed to evaluate the performance of portfolios with margin requirements. Through the simulations carried out in the end, we show that, with adequate portfolios, DEA can be used as an effective tool in computing the efficiencies of portfolios with margin requirements for the performance evaluation purpose. This study can be viewed as a justification of DEA in performance evaluation of portfolios with margin requirements.
Influence of thermal stress on marginal integrity of restorative materials
Maximiliano Sérgio Cenci
2008-04-01
The aim of this study was to evaluate the influence of thermal stress on the marginal integrity of restorative materials with different adhesive and thermal properties. Three hundred and sixty Class V cavities were prepared in the buccal and lingual surfaces of 180 bovine incisors. Cervical and incisal walls were located in dentin and enamel, respectively. Specimens were restored with resin composite (RC), glass ionomer (GI) or amalgam (AM), and randomly assigned to 18 groups (n=20) according to the material, number of cycles (500 or 1,000) and dwell time (30 s or 60 s). Dry and wet specimens served as controls. Specimens were immersed in 1% basic fuchsine solution (24 h) and sectioned, and microleakage was evaluated under x40 magnification. Data were analyzed by Kruskal-Wallis and Mann-Whitney tests. Thermal cycling regimens increased leakage in all AM restorations (p<0.05), and their effect on RC and GI restorations was only significant when a 60-s dwell time was used (p<0.05). Marginal integrity was more affected in AM restorations under thermal cycling stress, whereas RC and GI restoration margins were significantly affected only under longer dwell times.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
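The Toeplitz/Levinson step can be sketched as the standard Levinson-Durbin recursion for the prediction-error filter; the autocorrelation sequence below is a made-up example, and the actual receiver-function estimation deconvolves the vertical from the radial seismogram component:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson recursion for the prediction-error filter a (a[0] = 1)
    solving the Toeplitz normal equations; E is the final
    prediction-error power.  r is the autocorrelation sequence."""
    r = np.asarray(r, dtype=float)
    a = np.zeros(order + 1)
    a[0] = 1.0
    E = r[0]
    for k in range(1, order + 1):
        # reflection coefficient; staying below 1 in magnitude is the
        # stability property the abstract points out
        acc = r[k] + a[1:k] @ r[k - 1:0:-1]
        refl = -acc / E
        prev = a.copy()
        for i in range(1, k):
            a[i] = prev[i] + refl * prev[k - i]
        a[k] = refl
        E *= (1.0 - refl ** 2)
    return a, E

# Example: AR(1)-like autocorrelation r[m] = 0.5**m (invented).
a, E = levinson_durbin([1.0, 0.5, 0.25], order=2)
```

For this sequence the order-1 model is exact, so the second reflection coefficient vanishes and the filter is [1, -0.5, 0] with error power 0.75.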
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Approximate maximum-entropy moment closures for gas dynamics
McDonald, James G.
2016-11-01
Accurate prediction of flows in the regime between the traditional continuum regime and the free-molecular regime has proven difficult to obtain. Current methods are either inaccurate in this regime or prohibitively expensive for practical problems. Moment closures have long held the promise of providing new, affordable, accurate methods in this regime. The maximum-entropy hierarchy of closures seems to offer particularly attractive physical and mathematical properties. Unfortunately, several difficulties render the practical implementation of maximum-entropy closures very difficult. This work examines the use of simple approximations to these maximum-entropy closures and shows that physical accuracy that is vastly improved over continuum methods can be obtained without a significant increase in computational cost. Initially the technique is demonstrated for a simple one-dimensional gas. It is then extended to the full three-dimensional setting. The resulting moment equations are used for the numerical solution of shock-wave profiles with promising results.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Kenneth W. K. Lui
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semidefinite relaxation methods with the iterative quadratic maximum likelihood technique as well as the Cramér-Rao lower bound.
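For intuition about the multimodality the abstract refers to: for a single complex tone in white Gaussian noise, the ML frequency estimate is the periodogram peak. The sketch below evaluates that cost surface on a grid; the signal parameters are invented for illustration, and this direct search is exactly the kind of nonconvex optimization the semidefinite relaxation is designed to sidestep.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
f_true = 0.2                   # normalized frequency, cycles/sample (assumed)
n = np.arange(N)
x = np.exp(2j * np.pi * f_true * n)
x += 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# ML estimate for a single complex tone = argmax of the periodogram,
# a multimodal cost surface over frequency
f_grid = np.linspace(0.0, 0.5, 4096)
steer = np.exp(-2j * np.pi * np.outer(f_grid, n))  # candidate tones
periodogram = np.abs(steer @ x) ** 2 / N
f_hat = f_grid[int(np.argmax(periodogram))]
```

At this SNR the grid peak lands very close to the true frequency, but the surface has many sidelobe maxima, which is why local iterative methods need good initialization.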
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, the voltage of maximum power, the current of maximum power, and the maximum power are each plotted as a function of the time of day.
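The maximize-power-by-differentiation idea can be sketched with a single-diode panel model. All parameter values below are illustrative assumptions, not taken from the article: set dP/dV = 0 and solve for the voltage of maximum power numerically.

```python
import math

# illustrative single-diode model: I(V) = I_SC - I_0*(exp(V/V_T) - 1)
I_SC = 5.0   # short-circuit current, A (assumed)
I_0 = 1e-9   # diode saturation current, A (assumed)
V_T = 1.8    # lumped thermal voltage of the series-connected cells, V (assumed)

def power(v):
    return v * (I_SC - I_0 * (math.exp(v / V_T) - 1.0))

def dpower(v, h=1e-6):
    # central-difference derivative dP/dV
    return (power(v + h) - power(v - h)) / (2.0 * h)

# dP/dV is positive at V=0 and negative near open circuit, so bisect on it
lo, hi = 0.0, 40.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if dpower(mid) > 0.0:
        lo = mid
    else:
        hi = mid
v_mp = 0.5 * (lo + hi)      # voltage of maximum power
i_mp = power(v_mp) / v_mp   # current of maximum power
p_mp = power(v_mp)          # maximum power
```

Repeating this at each time of day (with irradiance-dependent I_SC) reproduces the curves the project plots.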
Application of geoinformation techniques in sustainable development of marginal rural areas
Leszczynska, G.
2009-04-01
The basic objective of the studies is to create a geographic information system that would assure integration of activities aimed at protecting biological diversity with sustainable development of marginal rural areas, through defining the conditions for development of tourism and recreation in the identified areas. The choice of that solution is a consequence of the fact that numerous phenomena and processes presented in maps are linked by functional relations or can be viewed as functions of space, time and attributes. The paper presents the system development stage aimed at elaborating a template for a system that addresses the above problem. For this problem, the geographic information system will be developed to support development of marginal rural areas through selection of appropriate forms of tourism for the endangered areas, including indication of locations for development of appropriate tourist infrastructure. Selection of the appropriate form of tourism will depend on the natural, tourist and infrastructure values present in a given area, and is conditioned by the need to present the biodiversity component present in those areas together with elements of traditional agricultural landscape. The most important problem is to reconcile two seemingly contradictory aims: 1. Preventing social and economic marginalization of the restructured rural areas. 2. Preserving biological diversity in the restructured areas. Agriculture influences many aspects of the natural environment such as water resources, biodiversity and the status of natural habitats, the status of soils, landscape and, in a wider context, the climate. Project implementation will involve application of technologies allowing analysis of the systems for managing marginal rural areas as spatial models based on geographic information systems. Modelling of marginal rural areas management using GIS technologies will involve creating spatial models of actual objects. On the basis of data
MAXIMUM PRINCIPLES FOR SECOND-ORDER PARABOLIC EQUATIONS
Antonio Vitolo
2004-01-01
This paper is the parabolic counterpart of previous ones about elliptic operators in unbounded domains. Maximum principles for second-order linear parabolic equations are established, showing a variant of the ABP-Krylov-Tso estimate based on a lower bound for supersolutions due to Krylov and Safonov. The results imply the uniqueness for the Cauchy-Dirichlet problem in a large class of infinite cylindrical and non-cylindrical domains.
On some method of the space elevator maximum stress reduction
Ambartsumian S. A.
2007-03-01
The possibility of the realization and exploitation of the space elevator project is connected with a number of complicated problems. One of them is the large elastic stresses arising in the body of the space elevator ribbon, which are considerably larger than the strength limit of modern materials. This note is devoted to solving the problem of maximum stress reduction in the ribbon by modifying the ribbon cross-section area.
Maximum-Entropy Inference with a Programmable Annealer.
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2016-03-03
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
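The contrast the abstract draws, ground-state (ML) decoding versus Boltzmann-weighted (maximum entropy) decoding, can be illustrated on a toy Ising chain small enough to enumerate exactly. The model, field strength, and temperature below are invented for illustration only.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n, J, T = 6, 1.0, 1.0
truth = np.ones(n)                        # transmitted bits as +/-1 spins
h = truth + 0.8 * rng.standard_normal(n)  # noisy received field

def energy(s):
    # random-field Ising chain: the "cost function" of the abstract
    return -np.dot(h, s) - J * np.dot(s[:-1], s[1:])

states = [np.array(s) for s in product([-1.0, 1.0], repeat=n)]
E = np.array([energy(s) for s in states])

# ML decoding: the ground state (minimum-energy configuration)
ml_decode = states[int(np.argmin(E))]

# maximum-entropy decoding: sign of the Boltzmann-averaged magnetization,
# which draws information from the excited states as well
w = np.exp(-(E - E.min()) / T)
w /= w.sum()
magnetization = sum(wi * s for wi, s in zip(w, states))
me_decode = np.sign(magnetization)
```

An annealer replaces the exhaustive enumeration with physical sampling from (approximately) the same Boltzmann distribution, which is what makes the finite-temperature decoder practical at scale.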
Enrico Zio
2008-01-01
In the present work, the uncertainties affecting the safety margins estimated from thermal-hydraulic code calculations are captured quantitatively by resorting to the order statistics and the bootstrap technique. The proposed framework of analysis is applied to the estimation of the safety margin, with its confidence interval, of the maximum fuel cladding temperature reached during a complete group distribution blockage scenario in a RBMK-1500 nuclear reactor.
Exploration of the continental margins of India
Siddiquie, H.N.; Hashimi, N.H.; Vora, K.H.; Pathak, M.C.
In mid 1970's the National Institute of Oceanography, Goa, India prepared a plan for systematic regional, geological and geophysical surveys of the continental margins of India. This involved over 75,000 km of underway (bathymetric, side scan sonar...
Mental Depreciation and Marginal Decision Making
Heath; Fennema
1996-11-01
We propose that individuals practice "mental depreciation," that is, they implicitly spread the fixed costs of their expenses over time or use. Two studies explore how people spread fixed costs on durable goods. A third study shows that depreciation can lead to two distinct errors in marginal decisions: First, people sometimes invest too much effort to get their money's worth from an expense (e.g., they may use a product a lot to spread the fixed expense across more uses). Second, people sometimes invest too little effort to get their money's worth: When people add a portion of the fixed cost to the current costs, their perceived marginal (i.e., incremental) costs exceed their true marginal costs. In response, they may stop investing because their perceived costs surpass the marginal benefits they are receiving. The latter effect is supported by two field studies that explore real board plan decisions by university students.
Louis de Grange
2010-09-01
Maximum entropy models are often used to describe supply and demand behavior in urban transportation and land use systems. However, they have been criticized for not representing behavioral rules of system agents and because their parameters seem to adjust only to modeler-imposed constraints. In response, it is demonstrated that the solution to the entropy maximization problem with linear constraints is a multinomial logit model whose parameters solve the likelihood maximization problem of this probabilistic model. But this result neither provides a microeconomic interpretation of the entropy maximization problem nor explains the equivalence of these two optimization problems. This work demonstrates that an analysis of the dual of the entropy maximization problem yields two useful alternative explanations of its solution. The first shows that the maximum entropy estimators of the multinomial logit model parameters reproduce rational user behavior, while the second shows that the likelihood maximization problem for multinomial logit models is the dual of the entropy maximization problem.
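The stated equivalence, that entropy maximization with linear constraints yields a multinomial logit, can be checked numerically: the logit (softmax) probabilities maximize entropy plus θ times expected utility over the simplex. The utilities and θ below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([1.0, 2.0, 3.5])  # systematic utilities of 3 alternatives (assumed)
theta = 0.8                    # dispersion parameter (assumed)

def logit(v, theta):
    # multinomial logit choice probabilities (softmax)
    e = np.exp(theta * v)
    return e / e.sum()

def objective(p):
    # the entropy-maximization objective: H(p) + theta * expected utility
    return -(p * np.log(p)).sum() + theta * (p @ v)

p_star = logit(v, theta)
best = objective(p_star)

# the logit solution should dominate random feasible distributions
for _ in range(2000):
    q = rng.dirichlet(np.ones(3))
    assert objective(q) <= best + 1e-9
```

No sampled distribution beats the logit probabilities, consistent with the closed-form solution p_i ∝ exp(θ v_i) of the constrained entropy problem.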
Time Safety Margin: Theory and Practice
2016-09-01
safety plan as the test event approaches. Did the planners miss anything? What doesn't make sense? Is it too conservative or not conservative enough? ... seconds of TSM for a 10-degree dive at 150 KTAS, producing inefficiency with excessive margin. Somewhere in the middle, the restriction makes sense, but ... 412TW-TIH-16-01. TIME SAFETY MARGIN: THEORY AND PRACTICE. WILLIAM R. GRAY, III, Chief Test Pilot, USAF Test Pilot School, SEPTEMBER 2016
Marketing margins and agricultural technology in Mozambique
Arndt, Channing; Jensen, Henning Tarp; Robinson, Sherman
2000-01-01
Improvements in agricultural productivity and reductions in marketing costs in Mozambique are analysed using a computable general equilibrium (CGE) model. The model incorporates detailed marketing margins and separates household demand for marketed and home-produced goods. Individual simulations of improved agricultural technology and lower marketing margins yield welfare gains across the economy. In addition, a combined scenario reveals significant synergy effects, as gains exceed the sum of gains from the individual scenarios. Relative welfare improvements are higher for poor rural households...
Statistical Analysis of Thermal Analysis Margin
Garrison, Matthew B.
2011-01-01
NASA Goddard Space Flight Center requires that each project demonstrate a minimum of 5°C margin between temperature predictions and hot and cold flight operational limits. The bounding temperature predictions include worst-case environment and thermal optical properties. The purpose of this work is to assess how current missions are performing against their pre-launch bounding temperature predictions and to suggest possible changes to the thermal analysis margin rules.
Investigating Mechanisms of Marginal Settlement Life Improvement
Hamid M. Mohammadi
2008-01-01
The main purpose of this research was to investigate mechanisms to improve marginal settlement life in Koohdasht County, Lorestan province. The research was a survey study, and a questionnaire was compiled for data collection. The statistical population comprised 1,560 households; random sampling was used, with a sample size estimated at 85 households. The questionnaire's reliability was confirmed by a Cronbach's alpha coefficient of 0.85, and its face validity was confirmed by scientific board members of the agricultural extension and education department of the University of Tehran. Data were analyzed with SPSS 11.5. The results revealed that marginal area residents were not in a good financial situation yet undertook a great supporting burden, and that their access to services and living conditions was poor. Improvement of fundamental infrastructure, including communication systems and sanitation systems, was therefore recognized as the most important mechanism of marginal settlement improvement according to the priority setting of mechanisms. Factor analysis further showed that seven main mechanisms were effective for improving marginal settlement life; in order of importance these were servicing and living condition improvement, credit-economic, civil and legal, control and prevention, population and migration control, infrastructure improvement, and hygiene.
Munoz, Sergio; Ramos, Van; Dickinson, Douglas P
2017-07-01
The recent application of printing for the fabrication of dental restorations has not been compared and evaluated for margin discrepancy (margin fit) with restorations fabricated using milling and conventional hand-waxing techniques. The purpose of this in vitro study was to evaluate and compare the margin discrepancy of complete gold crowns (CGCs) fabricated from printed, milled, and conventional hand-waxed patterns. Thirty crown patterns were produced by each of 3 different methods: printed by ProJet DP 3000, milled by LAVA CNC 500, and hand waxed, then invested and cast into CGCs. Each crown was evaluated at 10 positions around the margin on the corresponding epoxy die under ×50 light microscopy to determine the mean and maximum margin discrepancy. Measurements were made using a micrometer positioning stage. The results were compared by ANOVA (α=.05). Milled and hand-waxed patterns were not statistically different from each other (P>.05), while printed patterns produced significantly higher mean and maximum margin discrepancy than milled and hand-waxed patterns (P<.05). The ProJet DP 3000 printed patterns were significantly different from LAVA CNC 500 milled and hand-waxed patterns, with an overall poorer result. Fabricating CGCs from printed patterns produced a significantly higher number of crowns with unacceptable margin discrepancy (>120 μm). Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
A comparison of pay-as-bid and marginal pricing in electricity markets
Ren, Yongjun
This thesis investigates the behaviour of electricity markets under marginal and pay-as-bid pricing. Marginal pricing is believed to yield the maximum social welfare and is currently implemented by most electricity markets. However, in view of recent electricity market failures, pay-as-bid has been extensively discussed as a possible alternative to marginal pricing. In this research, marginal and pay-as-bid pricing have been analyzed in electricity markets with both perfect and imperfect competition. The perfect competition case is studied under both exact and uncertain system marginal cost prediction. The comparison of the two pricing methods is conducted through two steps: (i) identify the best offer strategy of the generating companies (gencos); (ii) analyze the market performance under these optimum genco strategies. The analysis results together with numerical simulations show that pay-as-bid and marginal pricing are equivalent in a perfect market with exact system marginal cost prediction. In perfect markets with uncertain demand prediction, the two pricing methods are also equivalent but in an expected value sense. If we compare from the perspective of second order statistics, all market performance measures exhibit much lower values under pay-as-bid than under marginal pricing. The risk of deviating from the mean is therefore much higher under marginal pricing than under pay-as-bid. In an imperfect competition market with exact demand prediction, the research shows that pay-as-bid pricing yields lower consumer payments and lower genco profits. This research provides quantitative evidence that challenges some common claims about pay-as-bid pricing. One is that under pay-as-bid, participants would soon learn how to offer so as to obtain the same or higher profits than what they would have obtained under marginal pricing. This research however shows that, under pay-as-bid, participants can at best earn the same profit or expected profit as under marginal pricing.
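The consumer-payment comparison can be made concrete with a toy one-period dispatch, holding genco offers fixed (all numbers below are invented): under marginal pricing every dispatched MW earns the clearing price, while under pay-as-bid each MW earns its own offer.

```python
# toy offers: (quantity MW, offer price $/MWh) — illustrative values only
offers = [(50.0, 30.0), (50.0, 20.0), (50.0, 25.0)]
demand = 120.0

# dispatch cheapest offers first until demand is met
dispatched, remaining = [], demand
for mw, price in sorted(offers, key=lambda o: o[1]):
    take = min(mw, remaining)
    if take > 0.0:
        dispatched.append((take, price))
        remaining -= take

clearing_price = dispatched[-1][1]       # offer of the marginal unit
pay_marginal = clearing_price * demand   # uniform (marginal) pricing payment
pay_as_bid = sum(mw * p for mw, p in dispatched)  # each unit paid its offer
```

For identical offers, the pay-as-bid payment is never larger than the marginal-pricing payment; the thesis's substantive point is that offers themselves change across pricing regimes, which is what the equilibrium analysis addresses.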
Marginal Bidding: An Application of the Equimarginal Principle to Bidding in TAC SCM
Greenwald, Amy; Naroditskiy, Victor; Odean, Tyler; Ramirez, Mauricio; Sodomka, Eric; Zimmerman, Joe; Cutler, Clark
We present a fast and effective bidding strategy for the Trading Agent Competition in Supply Chain Management (TAC SCM). In TAC SCM, manufacturers compete to procure computer parts from suppliers (the procurement problem), and then sell assembled computers to customers in reverse auctions (the bidding problem). This paper is concerned only with bidding, in which an agent must decide how many computers to sell and at what prices to sell them. We propose a greedy solution, Marginal Bidding, inspired by the Equimarginal Principle, which states that revenue is maximized among possible uses of a resource when the return on the last unit of the resource is the same across all areas of use. We show experimentally that certain variations of Marginal Bidding can compute bids faster than our ILP solution, which enables Marginal Bidders to consider future demand as well as current demand, and hence achieve greater revenues when knowledge of the future is valuable.
A Sufficient Condition for Power Flow Insolvability with Applications to Voltage Stability Margins
Molzahn, Daniel K; DeMarco, Christopher L
2012-01-01
For the nonlinear power flow problem specified with standard PQ, PV, and slack bus equality constraints, we present a sufficient condition under which the specified set of nonlinear algebraic equations has no solution. This sufficient condition is constructed in a framework of an associated feasible, convex optimization problem. The objective employed in this optimization problem yields a measure of distance (in a parameter set) to the power flow solution boundary. In practical terms, this distance is closely related to quantities that previous authors have proposed as voltage stability margins. A typical margin is expressed in terms of the parameters of system loading (injected powers); here we additionally introduce a new margin in terms of the parameters of regulated bus voltages.
Afshar H
2006-07-01
Background and Aim: The need for recrimping precrimped stainless steel crowns by the dentist in the clinic is controversial. This study aimed to evaluate the change in marginal circumference and marginal thickness of precrimped stainless steel crowns after recrimping. Materials and Methods: In this experimental study, 30 primary photos were taken of the margins of 30 S.S.Cs (3M, Ni-Cr) for tooth 85 with a digital camera fixed at a predetermined distance. Margins of the crowns were crimped with 114 and 137 pliers using a controlled force (0.2 N), and then 30 secondary photos were taken under the same conditions. The circumferences of the crown margins in the primary (group A) and secondary (group B) photos were assessed by a digitizer system. Comparing the circumferences of the crown margins in the primary and secondary photos showed a significant decrease after crimping. The thickness of 30 random points on the crown margin of a similar crown was measured by SEM (×150). Then the same procedure, including taking a primary photo, crimping, and taking a secondary photo, was done for the sample crown. After a significant reduction in margin circumference, the thickness of 30 other random points on the crown margin was measured by SEM. Data were analyzed by paired sample t-test with p<0.05 as the limit of significance. Results: The mean marginal circumference of precrimped stainless steel crowns was reduced by 7.3%, which was significant (P<0.001). On the other hand, the mean marginal thickness of the sample stainless steel crown showed an 18 µm increase. Conclusion: According to the results of this study, the marginal circumference of precrimped stainless steel crowns (3M, Ni-Cr) showed a significant decrease after crimping. It is concluded that crimping stainless steel crowns, even precrimped ones, seems necessary.
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
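The proposed algorithm uses maximum dynamic flow computations as subroutines. As background, here is a minimal static max-flow (Edmonds-Karp) sketch; the dynamic-flow and inverse machinery of the paper itself is not reproduced, and the example network is invented.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow. cap: dict {u: {v: capacity}};
    it is modified in place as the residual graph."""
    for u in list(cap):                   # ensure reverse residual edges exist
        for v in list(cap[u]):
            cap.setdefault(v, {}).setdefault(u, 0.0)
    flow = 0.0
    while True:
        parent, q = {s: None}, deque([s])  # BFS: shortest augmenting path
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                    # recover path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:                  # push flow, update residuals
            cap[u][v] -= b
            cap[v][u] += b
        flow += b

g = {'s': {'a': 3.0, 'b': 2.0}, 'a': {'t': 2.0, 'b': 1.0}, 'b': {'t': 3.0}}
value = max_flow(g, 's', 't')
```

On this network the maximum flow value is 5: two units along s-a-t, two along s-b-t, and one along s-a-b-t.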
Energy Momentum Tensor and Marginal Deformations in Open String Field Theory
Sen, A
2004-01-01
Marginal boundary deformations in a two dimensional conformal field theory correspond to a family of classical solutions of the equations of motion of open string field theory. In this paper we develop a systematic method for relating the parameter labelling the marginal boundary deformation in the conformal field theory to the parameter labelling the classical solution in open string field theory. This is done by first constructing the energy-momentum tensor associated with the classical solution in open string field theory using Noether method, and then comparing this to the answer obtained in the conformal field theory by analysing the boundary state. We also use this method to demonstrate that in open string field theory the tachyon lump solution on a circle of radius larger than one has vanishing pressure along the circle direction, as is expected for a codimension one D-brane.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous Ic degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.
Inverse feasibility problems of the inverse maximum flow problems
Adrian Deaconu; Eleonor Ciurea
2013-04-01
A linear time method to decide if any inverse maximum flow problem (denoted General Inverse Maximum Flow (IMFG)) has a solution is deduced. If IMFG does not have a solution, methods to transform IMFG into a feasible problem are presented. The methods consist of modifying as little as possible the restrictions on the variation of the bounds of the flow. New inverse combinatorial optimization problems are introduced and solved.
Payoff-monotonic game dynamics and the maximum clique problem.
Pelillo, Marcello; Torsello, Andrea
2006-05-01
Evolutionary game-theoretic models and, in particular, the so-called replicator equations have recently proven to be remarkably effective at approximately solving the maximum clique and related problems. The approach is centered around a classic result from graph theory that formulates the maximum clique problem as a standard (continuous) quadratic program and exploits the dynamical properties of these models, which, under a certain symmetry assumption, possess a Lyapunov function. In this letter, we generalize previous work along these lines in several respects. We introduce a wide family of game-dynamic equations known as payoff-monotonic dynamics, of which replicator dynamics are a special instance, and show that they enjoy precisely the same dynamical properties as standard replicator equations. These properties make any member of this family a potential heuristic for solving standard quadratic programs and, in particular, the maximum clique problem. Extensive simulations, performed on random as well as DIMACS benchmark graphs, show that this class contains dynamics that are considerably faster than and at least as accurate as replicator equations. One problem associated with these models, however, relates to their inability to escape from poor local solutions. To overcome this drawback, we focus on a particular subclass of payoff-monotonic dynamics used to model the evolution of behavior via imitation processes and study the stability of their equilibria when a regularization parameter is allowed to take on negative values. A detailed analysis of these properties suggests a whole class of annealed imitation heuristics for the maximum clique problem, which are based on the idea of varying the parameter during the imitation optimization process in a principled way, so as to avoid unwanted inefficient solutions. Experiments show that the proposed annealing procedure does help to avoid poor local optima by initially driving the dynamics toward promising regions in
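The Motzkin-Straus connection the letter builds on can be sketched directly: discrete-time replicator dynamics on the adjacency matrix climb the quadratic form xᵀAx, whose maximum over the simplex equals 1 - 1/ω(G) for clique number ω(G). Below is a minimal sketch (not the paper's annealed variant) on a 4-vertex graph whose maximum clique is a triangle.

```python
import numpy as np

# adjacency matrix: triangle {0,1,2} plus vertex 3 attached only to vertex 0
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
n = A.shape[0]

x = np.full(n, 1.0 / n)      # start at the barycenter of the simplex
for _ in range(5000):        # discrete-time replicator dynamics
    Ax = A @ x
    x = x * Ax / (x @ Ax)    # payoff-monotonic update; x stays on the simplex

value = x @ A @ x            # Motzkin-Straus: maximum value = 1 - 1/omega(G)
omega = round(1.0 / (1.0 - value))
```

The dynamics converge to the characteristic vector of the triangle, so the recovered clique number is 3. On harder graphs plain replicator dynamics can stall in the poor local solutions the letter describes, which motivates its annealed imitation heuristics.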
The role of pressure anisotropy on the maximum mass of cold compact stars
Karmakar, S.; Mukherjee, S.; Sharma, R.; Maharaj, S.D.
2007-01-01
We study the physical features of a class of exact solutions for cold compact anisotropic stars. The effect of pressure anisotropy on the maximum mass and surface redshift is analysed in the Vaidya-Tikekar model. It is shown that maximum compactness, redshift and mass increase in the presence of anisotropic pressures; numerical values are generated which are in agreement with observation.
Reconstructing Rodinia by Fitting Neoproterozoic Continental Margins
Stewart, John H.
2009-01-01
Reconstructions of Phanerozoic tectonic plates can be closely constrained by lithologic correlations across conjugate margins by paleontologic information, by correlation of orogenic belts, by paleomagnetic location of continents, and by ocean floor magmatic stripes. In contrast, Proterozoic reconstructions are hindered by the lack of some of these tools or the lack of their precision. To overcome some of these difficulties, this report focuses on a different method of reconstruction, namely the use of the shape of continents to assemble the supercontinent of Rodinia, much like a jigsaw puzzle. Compared to the vast amount of information available for Phanerozoic systems, such a limited approach for Proterozoic rocks, may seem suspect. However, using the assembly of the southern continents (South America, Africa, India, Arabia, Antarctica, and Australia) as an example, a very tight fit of the continents is apparent and illustrates the power of the jigsaw puzzle method. This report focuses on Neoproterozoic rocks, which are shown on two new detailed geologic maps that constitute the backbone of the study. The report also describes the Neoproterozoic, but younger or older rocks are not discussed or not discussed in detail. The Neoproterozoic continents and continental margins are identified based on the distribution of continental-margin sedimentary and magmatic rocks that define the break-up margins of Rodinia. These Neoproterozoic continental exposures, as well as critical Neo- and Meso-Neoproterozoic tectonic features shown on the two new map compilations, are used to reconstruct the Mesoproterozoic supercontinent of Rodinia. This approach differs from the common approach of using fold belts to define structural features deemed important in the Rodinian reconstruction. Fold belts are difficult to date, and many are significantly younger than the time frame considered here (1,200 to 850 Ma). Identifying Neoproterozoic continental margins, which are primarily
Climate change and evolutionary adaptations at species' range margins.
Hill, Jane K; Griffiths, Hannah M; Thomas, Chris D
2011-01-01
During recent climate warming, many insect species have shifted their ranges to higher latitudes and altitudes. These expansions mirror those that occurred after the Last Glacial Maximum when species expanded from their ice age refugia. Postglacial range expansions have resulted in clines in genetic diversity across present-day distributions, with a reduction in genetic diversity observed in a wide range of insect taxa as one moves from the historical distribution core to the current range margin. Evolutionary increases in dispersal at expanding range boundaries are commonly observed in virtually all insects that have been studied, suggesting a positive feedback between range expansion and the evolution of traits that accelerate range expansion. The ubiquity of this phenomenon suggests that it is likely to be an important determinant of range changes. A better understanding of the extent and speed of adaptation will be crucial to the responses of biodiversity and ecosystems to climate change.
Estimation of marginal costs at existing waste treatment facilities.
Martinez-Sanchez, Veronica; Hulgaard, Tore; Hindsgaul, Claus; Riber, Christian; Kamuk, Bettina; Astrup, Thomas F
2016-04-01
address and include costs in existing waste facilities in decision-making may unintentionally lead to higher overall costs at the societal level. To avoid misleading conclusions, economic assessment of alternative SWM solutions should not only consider potential costs associated with alternative treatment but also include marginal costs associated with existing facilities.
Design of simplified maximum-likelihood receivers for multiuser CPM systems.
Bing, Li; Bai, Baoming
2014-01-01
A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.
Abrupt plate accelerations shape rifted continental margins
Brune, Sascha; Williams, Simon E.; Butterworth, Nathaniel P.; Müller, R. Dietmar
2016-08-01
Rifted margins are formed by persistent stretching of continental lithosphere until breakup is achieved. It is well known that strain-rate-dependent processes control rift evolution, yet quantified extension histories of Earth’s major passive margins have become available only recently. Here we investigate rift kinematics globally by applying a new geotectonic analysis technique to revised global plate reconstructions. We find that rifted margins feature an initial, slow rift phase (less than ten millimetres per year, full rate) and that an abrupt increase of plate divergence introduces a fast rift phase. Plate acceleration takes place before continental rupture and considerable margin area is created during each phase. We reproduce the rapid transition from slow to fast extension using analytical and numerical modelling with constant force boundary conditions. The extension models suggest that the two-phase velocity behaviour is caused by a rift-intrinsic strength-velocity feedback, which can be robustly inferred for diverse lithosphere configurations and rheologies. Our results explain differences between proximal and distal margin areas and demonstrate that abrupt plate acceleration during continental rifting is controlled by the nonlinear decay of the resistive rift strength force. This mechanism provides an explanation for several previously unexplained rapid absolute plate motion changes, offering new insights into the balance of plate driving forces through time.
Theory of margination in confined multicomponent suspensions
Henriquez Rivera, Rafael; Sinha, Kushal; Graham, Michael
2015-11-01
In blood flow, leukocytes and platelets tend to segregate near the vessel walls; this is known as margination. Margination of leukocytes and platelets is important in physiological processes, medical diagnostics and drug delivery. A mechanistic theory is developed to describe flow-induced segregation in confined multicomponent suspensions of deformable particles such as blood. The theory captures the essential features of margination by describing it in terms of two key competing processes in these systems at low Reynolds number: wall-induced migration and hydrodynamic pair collisions. The theory also includes the effect of physical properties of the deformable particles and molecular diffusion. Several regimes of segregation are identified, depending on the value of a "margination parameter" M. Moreover, there is a critical value of M below which a sharp "drainage transition" occurs: one component is completely depleted from the bulk flow to the vicinity of the walls. Direct hydrodynamic simulations also display this transition in suspensions where the components differ in size or flexibility. The developed mechanistic theory leads to substantial insight into the origins of margination and will help in guiding development of new technologies involving multicomponent suspensions. This work was supported by NSF grant CBET-1436082.
Collapse of modern carbonate platform margins
Mullins, H.T.; Hine, A.C.; Gardulski, A.
1985-01-01
Modern carbonate platform margins in the Florida-Bahama region have been viewed as depositional or constructional features. However, recent studies have shown that carbonate escarpments, such as the Blake-Bahama and West Florida Escarpments, are erosional in origin where the platform margins have a scalloped or horse-shoe shape. Seismic reflection data from one of these crescentic features along the west Florida platform margin indicate that it originated by large-scale gravity collapse (slump). This collapse structure extends for at least 120 km along the margin and has removed about 350 m of strata as young as early Neogene. Although at least three generations of slope failure are recognized, catastrophic collapse appears to have occurred in the mid-Miocene. Gravitational instability due to high rates of sediment accumulation may have been the triggering mechanism. These data suggest that submarine slumping is an important process in the retreat of limestone escarpments and in the generation of carbonate megabreccia debris flows. Scalloped platform margins occur on satellite images of northern Exuma Sound and Columbus Basin in the Bahamas. The authors suggest that large-scale submarine slumping can cause elongation of structurally controlled intraplatform basins (Exuma Sound), and produce anomalous horse-shoe shaped basins (Columbus Basin) by mega-collapse processes.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class associates with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Moore, Joseph A; Gordon, John J; Anscher, Mitchell S; Siebers, Jeffrey V
2009-09-01
of one case in which PTP gave a slightly higher TCP. For critical structures that do not meet the optimization criteria, PTP shows a decrease in the volume receiving the maximum specified dose. PTP reduces local normal tissue volumes receiving the maximum dose on average by 48%. PTP results in lower mean dose to all critical structures for all plans. PTP results in a 2.5% increase in the probability of uncomplicated control (P+), along with a 1.9% reduction in rectum normal tissue complication probability (NTCP), and a 0.7% reduction in bladder NTCP. PTP-based plans show improved conformality as compared with margin-based plans with an average PTP-based dosimetric margin at 7100 cGy of 0.65 cm compared with the margin-based 0.90 cm and a PTP-based dosimetric margin at 3960 cGy of 1.60 cm compared with the margin-based 1.90 cm. PTP-based plans show similar sensitivity to variations of the uncertainty during treatment from the uncertainty used in planning as compared to margin-based plans. For equal target coverage, when compared to margin-based plans, PTP results in equal or lower doses to normal structures. PTP results in more conformal plans than margin-based plans and shows similar sensitivity to variations in uncertainty.
Digital Margins: How spatially and socially marginalized communities deal with digital exclusion
Salemink, Koen
2016-01-01
The increasing importance of the Internet as a means of communication has transformed economies and societies. For spatially and socially marginalized communities, this transformation has resulted in digital exclusion and further marginalization. This book presents a study of two kinds of marginalized communities.
Marginalization revisited: critical, postmodern, and liberation perspectives.
Hall, J M
1999-12-01
Marginalization was advocated by Hall, Stevens, and Meleis in 1994 as a guiding concept for valuing diversity in knowledge development. Properties, risks, and resilience associated with the concept were detailed. This conceptualization of marginalization is reexamined here for its sociopolitical usefulness to nursing, from (1) critical theory, (2) postmodern, and (3) liberation philosophy perspectives. Additional properties are proposed to update the original conceptualization. These include: exteriority, Eurocentrism, constraint, economics, seduction, testimony, and hope. Effects of Eurocentric capitalism on all marginalized people are explored. Nursing implications include the need for interdisciplinary dialogue about the ethics of promoting and exporting Eurocentrism in nursing education and practice, and the need for integrated economic analyses of all aspects of life and health.
Pathogenesis of splenic marginal zone lymphoma
Ming-Qing Du
2015-11-01
Splenic marginal zone lymphoma (SMZL) is a distinct low-grade B-cell lymphoma with an immunophenotype similar to that of splenic marginal zone B-cells. Like normal splenic marginal zone B-cells, SMZLs also show variable features in somatic mutations of their rearranged immunoglobulin genes, with ∼90% of cases harbouring somatic mutations but at remarkably variable degrees, suggesting that SMZL may have multiple cells of origin, deriving from the heterogeneous B-cells of the splenic marginal zone. Notably, ∼30% of SMZLs show biased usage of IGHV1-2*04, with the expressed BCR being potentially polyreactive to autoantigens. Recent exome and targeted sequencing studies have identified a wide spectrum of somatic mutations in SMZL, with the recurrent mutations targeting multiple signalling pathways that govern the development of splenic marginal zone B-cells. These recurrent mutations occur in KLF2 (20–42%), NOTCH2 (6.5–25%), NF-κB (CARD11 ∼7%, IKBKB ∼7%, TNFAIP3 7–13%, TRAF3 5%, BIRC3 6.3%) and TLR (MYD88 5–13%) signalling pathways. Interestingly, the majority of SMZLs with KLF2 mutation have both 7q32 deletion and IGHV1-2 rearrangement, and these cases also have additional mutations in NOTCH2, TNFAIP3, or TRAF3. There is potential oncogenic cooperation among concurrent genetic changes, for example between the IGHV1-2-expressing BCR and KLF2 mutation in activation of the canonical NF-κB pathway, and between KLF2 and TRAF3 mutations in activation of the non-canonical NF-κB pathway. These novel genetic findings have provided considerable insights into the pathogenesis of SMZL and will stimulate research in both normal and malignant marginal zone B-cells.
Working marginal reserves using Auger technology
Celada Tamames, B.
1988-03-01
Following up an idea put forward at a meeting of the PEN (National Energy Plan) R and D working party held in Ponferrada in the province of Leon, Ocicarbon contracted Geocontrol SA to carry out a study on the possible use of Auger technology for working marginal coal reserves. This article summarises the most important points in the final report on this project: current state of Auger technology, inventory of marginal coal reserves in Spain and the use of Auger technology in Spain. 6 figs., 2 tabs.
Algorithms for computing the multivariable stability margin
Tekawy, Jonathan A.; Safonov, Michael G.; Chiang, Richard Y.
1989-01-01
Stability margin for multiloop flight control systems has become a critical issue, especially in highly maneuverable aircraft designs where there are inherent strong cross-couplings between the various feedback control loops. To cope with this issue, we have developed computer algorithms based on non-differentiable optimization theory. These algorithms have been developed for computing the Multivariable Stability Margin (MSM). The MSM of a dynamical system is the size of the smallest structured perturbation in component dynamics that will destabilize the system. These algorithms have been coded and appear to be reliable. As illustrated by examples, they provide the basis for evaluating the robustness and performance of flight control systems.
Jatropha potential on marginal land in Ethiopia
Wendimu, Mengistu Assefa
narrative. But both the availability and suitability of "marginal" land for commercial-level jatropha production are not well understood or examined, especially in Africa. Using a case study of large-scale jatropha plantation in Ethiopia, this paper examines the process of land identification for jatropha investments, and the agronomic performance of large-scale jatropha plantation on so-called marginal land. Although it has been argued that jatropha can be grown well on marginal land without irrigation, and thus does not compete for land and water or displace food production from agricultural land, this study…
Heuer M
2010-01-01
Objective: Due to organ shortage, the average waiting time for a kidney in Germany is about 4 years after the start of dialysis. The number of kidney grafts recovered can only be maintained by accepting older and expanded criteria donors. The aim of this study was to analyse the impact of donor and recipient risk on long-term kidney function. Methods: All deceased-donor kidney transplantations were considered. We retrospectively studied 332 patients between 2002 and 2006, divided into 4 groups reflecting donor and recipient risk. Results: Non-marginal recipients were less likely to receive a marginal organ (69 of 207, 33%) as compared to marginal recipients, of whom two-thirds received a marginal organ (p …). Conclusions: As we were able to show, the expanded criteria donor has a far bigger effect on long-term graft function than the "extra-risk" recipient. Although there have been attempts to define groups of recipients who should be offered ECD kidneys primarily, the discussion is still ongoing.
17 CFR 242.405 - Withdrawal of margin.
2010-04-01
...) REGULATIONS M, SHO, ATS, AC, AND NMS AND CUSTOMER MARGIN REQUIREMENTS FOR SECURITY FUTURES Customer Margin Requirements for Security Futures § 242.405 Withdrawal of margin. (a) By the customer. Except as otherwise... account after such withdrawal is sufficient to satisfy the required margin for the security futures...
Rijmen, Frank
2009-01-01
Maximum marginal likelihood estimation of multidimensional item response theory (IRT) models has been hampered by the calculation of the multidimensional integral over the ability distribution. However, the researcher often has a specific hypothesis about the conditional (in)dependence relations among the latent variables. Exploiting these…
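The multidimensional integral over the ability distribution that this abstract refers to is, in the unidimensional case, commonly approximated by Gauss-Hermite quadrature. A minimal sketch, assuming a two-parameter logistic (2PL) model and illustrative item parameters (neither is taken from the abstract):

```python
import numpy as np

def two_pl(theta, a, b):
    """P(correct | theta) under the two-parameter logistic (2PL) model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def marginal_likelihood(responses, a, b, n_nodes=31):
    """Marginal likelihood of one response pattern, integrating the
    ability theta out against a standard normal prior via
    Gauss-Hermite quadrature."""
    # hermegauss uses the probabilists' weight exp(-x^2 / 2)
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / np.sqrt(2.0 * np.pi)   # normalize to the N(0,1) density
    like = np.ones_like(nodes)
    for r, ai, bi in zip(responses, a, b):
        p = two_pl(nodes, ai, bi)
        like *= p**r * (1.0 - p)**(1 - r)      # per-item response probability
    return float(np.sum(weights * like))

# Three hypothetical items, one examinee answering (correct, correct, wrong)
L = marginal_likelihood([1, 1, 0], a=[1.0, 1.2, 0.8], b=[-0.5, 0.0, 0.5])
```

With several latent dimensions the node grid grows exponentially, which is exactly the bottleneck the abstract describes; exploiting conditional independence among latent variables prunes that grid.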
BIJELLA Maria Fernanda Borro
2001-01-01
This study evaluated, in vitro, marginal microleakage in class II restorations made with the glass ionomer cement Vitremer and with the composite resins Ariston pHc and P-60. The aims of the study were to assess the effect of thermocycling on those materials and to evaluate two methods used in the analysis of dye penetration. Sixty premolars divided into three groups were used; the teeth had proximal cavities whose cervical walls were located 1 mm below the cementoenamel junction. Half of the test specimens from each group underwent thermocycling; the other half remained in deionized water at 37°C. The specimens were immersed, for 24 hours, in a basic 0.5% fuchsin solution at 37°C. For the analysis of microleakage, the specimens were sectioned in a mesio-distal direction, and the observation was carried out with the software Imagetools. The results were evaluated through two-way ANOVA and through Tukey's test. All groups presented marginal microleakage. The smallest values were obtained with Vitremer, followed by those obtained with the composite resins P-60 and Ariston pHc. There was no statistically significant difference caused by thermocycling, and the method of maximum infiltration was the best for detecting the extension of microleakage.
On the Evolution of Glaciated Continental Margins
Sverre Laberg, Jan; Rydningen, Tom Arne; Safronova, Polina A.; Forwick, Matthias
2016-04-01
Glaciated continental margins, continental margins where a grounded ice sheet has repeatedly been at or near the shelf break, are found at both northern and southern high latitudes. Their evolution is in several aspects different from that of their low-latitude counterparts, where eustatic sea-level variations exert a fundamental control on their evolution and where fluvial systems provide the main sediment input. From studies of the Norwegian, Barents Sea, Svalbard and NE Greenland continental margins we propose the following factors as the main controls on the evolution of glaciated continental margins: 1) pre-glacial relief, controlling the accommodation space; 2) ice sheet glaciology, including the location of fast-flowing ice streams, where source-area morphology exerts a fundamental control; 3) composition of the glacigenic sediments, where the clay content has in previous studies been found to be important; and 4) sea level, controlled both by eustasy and isostasy. From three case studies, 1) the western Barents Sea, 2) part of the North Norwegian (Troms) margin, and 3) the mid-Norwegian margin, the influence of these factors on sea-floor morphology, sedimentary processes of the continental slope and deep sea, and continental-margin architecture is discussed. The pre-glacial relief of the mid-Norwegian and Troms margins relates to the onset of rifting and plate break-up in the early Cenozoic, while for the SW Barents Sea plate shear was followed by rifting. A wide zone of extended continental crust occurs offshore mid-Norway, while this zone is much narrower offshore Troms, leading to a more pronounced pre-glacial relief. Regarding sediment delivery and ice sheet glaciology, the western Barents Sea exemplifies very high sediment input corresponding to an estimated average erosion of the source area of ~0.4 mm/yr (SW Barents Sea), much of which is related to subglacial erosion of Mesozoic to Cenozoic sedimentary rocks by large paleo-ice streams. The mid-Norwegian margin
Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing.
Ghesu, Florin C; Krubasik, Edward; Georgescu, Bogdan; Singh, Vivek; Yefeng Zheng; Hornegger, Joachim; Comaniciu, Dorin
2016-05-01
Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow from diagnosis, patient stratification, therapy planning, intervention and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: the efficiency in scanning high-dimensional parametric spaces, and the need for representative image features, which require significant manual engineering effort. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically, 9 parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive amount of billions of scanning hypotheses. The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, in our system we learn sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary. Experimental results are presented on the aortic valve in ultrasound using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45
LITERATURE REVIEW ON MAXIMUM LOADING OF RADIONUCLIDES ON CRYSTALLINE SILICOTITANATE
Adu-Wusu, K.; Pennebaker, F.
2010-10-13
Plans are underway to use small column ion exchange (SCIX) units installed in high-level waste tanks to remove Cs-137 from highly alkaline salt solutions at Savannah River Site. The ion exchange material slated for the SCIX project is engineered or granular crystalline silicotitanate (CST). Information on the maximum loading of radionuclides on CST is needed by Savannah River Remediation for safety evaluations. A literature review has been conducted that culminated in the estimation of the maximum loading of all but one of the radionuclides of interest (Cs-137, Sr-90, Ba-137m, Pu-238, Pu-239, Pu-240, Pu-241, Am-241, and Cm-244). No data was found for Cm-244.
C. H. Nelson
2012-11-01
We summarize the importance of great earthquakes (Mw ≳ 8) for hazards, stratigraphy of basin floors, and turbidite lithology along the active tectonic continental margins of the Cascadia subduction zone and the northern San Andreas transform fault by utilizing studies of swath bathymetry, visual core descriptions, grain-size analysis, X-ray radiographs and physical properties. Recurrence times of Holocene turbidites as proxies for earthquakes on the Cascadia and northern California margins are analyzed using two methods: (1) radiometric dating (the ¹⁴C method), and (2) relative dating, using hemipelagic sediment thickness and sedimentation rates (the H method). The H method provides (1) the best estimate of minimum recurrence times, which are the most important for seismic hazard risk analysis, and (2) the most complete dataset of recurrence times, which shows a normal distribution pattern for paleoseismic turbidite frequencies. We observe that, on these tectonically active continental margins, during the sea-level highstand of Holocene time, triggering of turbidity currents is controlled dominantly by earthquakes, and paleoseismic turbidites have an average recurrence time of ~550 yr in the northern Cascadia Basin and ~200 yr along the northern California margin. The minimum recurrence times for great earthquakes are approximately 300 yr for the Cascadia subduction zone and 130 yr for the northern San Andreas Fault, which indicates both fault systems are in (Cascadia) or very close to (San Andreas) the early window for another great earthquake.
On active tectonic margins with great earthquakes, the volumes of mass transport deposits (MTDs) are limited on basin floors along the margins. The maximum run-out distances of MTD sheets across abyssal-basin floors along active margins are an order of magnitude less (~100 km) than on passive margins (~1000 km). The great earthquakes along the Cascadia and northern California margins
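The H method described in the abstract above reduces to simple arithmetic: the time between successive turbidites is the thickness of hemipelagic sediment deposited between them divided by the hemipelagic sedimentation rate. A sketch with hypothetical values chosen to reproduce the ~550 yr Cascadia average quoted above:

```python
def recurrence_time_yr(hemipelagic_thickness_cm, sed_rate_cm_per_kyr):
    """H method: inter-turbidite time (years) = hemipelagic sediment
    thickness between turbidites / hemipelagic sedimentation rate."""
    return 1000.0 * hemipelagic_thickness_cm / sed_rate_cm_per_kyr

# Hypothetical values: 5.5 cm of hemipelagic mud at 10 cm/kyr -> 550 yr
t = recurrence_time_yr(5.5, 10.0)
```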
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
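Not the specific algorithm of the paper, but a minimal illustration of the dual idea it describes: for a discrete maximum-entropy problem with linear constraints, the primal distribution has an exponential form in the dual variables, so iterating on the few dual parameters avoids minimizing over the full image space:

```python
import numpy as np

def maxent_dual(A, b, lr=0.5, n_iter=2000):
    """Maximum-entropy distribution p (p_i >= 0, sum p = 1) satisfying
    the linear constraints A @ p = b, found by gradient descent on the
    dual variables lam. The primal solution is exponential-family:
    p_i proportional to exp(sum_k lam_k * A[k, i])."""
    lam = np.zeros(A.shape[0])
    for _ in range(n_iter):
        w = np.exp(lam @ A)
        p = w / w.sum()                 # current primal estimate
        lam += lr * (b - A @ p)         # dual gradient step
    return p

# Toy example: a distribution on {0, 1, 2, 3} with prescribed mean 2.0
x = np.arange(4.0)
A = x[None, :]                          # one constraint row: E[x] = 2.0
p = maxent_dual(A, np.array([2.0]))
```

With one constraint there is a single dual parameter to iterate on, versus four primal probabilities; the gap widens rapidly for images, which is the dual algorithm's selling point.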
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block-fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed-form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
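The equal-power MISO rate identified as optimal in the abstract above can be estimated by Monte Carlo simulation; a sketch under its stated assumptions (Rayleigh block fading, no transmitter CSI), with the SNR and sample count chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def miso_ergodic_rate(n_tx, snr, n_samples=200_000):
    """Monte Carlo estimate of the ergodic rate of an n_tx x 1 Rayleigh
    MISO channel without transmitter CSI, using equal power allocation
    with uncorrelated signals:  E[ log2(1 + (snr / n_tx) * ||h||^2) ]."""
    h = (rng.standard_normal((n_samples, n_tx)) +
         1j * rng.standard_normal((n_samples, n_tx))) / np.sqrt(2.0)
    gain = np.sum(np.abs(h) ** 2, axis=1)    # ||h||^2 per fading block
    return float(np.mean(np.log2(1.0 + (snr / n_tx) * gain)))

r1 = miso_ergodic_rate(1, snr=10.0)
r4 = miso_ergodic_rate(4, snr=10.0)   # more antennas harden the channel
```

Spreading the same total power over more antennas reduces the variance of the effective gain, raising the ergodic rate toward log2(1 + snr).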
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.)
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
Geological features and geophysical signatures of continental margins of India
Krishna, K.S.
and classification of continental margins are in general dependent on the style of continental splitting, rifting, subsidence and their proximity to tectonic plate boundaries; at times the margins undergo modifications by sediment deposition and volcanic... by Deccan-Reunion hotspot volcanism and Bengal Fan sedimentation, respectively. Volcanism has dominated on the western continental margin of India, whereby the margin has been turned into a volcanic passive continental margin, while the eastern continental...
5TH BIOTECHNOLOGICAL INVESTIGATIONS OCEAN MARGINS PROGRAM
DR. ARTURO MASSOL, PROGRAM CHAIR; DR. ROSA BUXEDA, PROGRAM CO-CHAIR
2004-01-08
BI-OMP supports DOE's mission in Climate Change Research. The program provides the fundamental understanding of the linkages between carbon and nitrogen cycles in ocean margins. Researchers are providing a mechanistic understanding of these cycles, using the tools of modern molecular biology. The models will allow policy makers to determine safe levels of greenhouse gases for the Earth System.
Late relapse of marginal zone lymphoma
Rocha, Talita M. B. S.; Bortolheiro, Tereza C.; Costa, Eduardo; Haardt, Daniela; Paes, Roberto P.; Chiattone, Carlos S.
2009-01-01
Marginal zone lymphoma is a low-grade lymphoma with an indolent clinical course and potential for relapse.1,2 We present a case of late relapse after 25 years of apparent complete remission, raising the possibility of relapse of pre-existing disease or the development of a new neoplastic clone.
Large margin image set representation and classification
Wang, Jim Jing-Yan
2014-07-06
In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which could provide the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
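The margin definition in the abstract above can be sketched directly; the minimum-pairwise-distance set metric below is an illustrative stand-in for the paper's sample/affine-hull distance, not the authors' actual model:

```python
import numpy as np

def set_distance(X, Y):
    """Minimum pairwise Euclidean distance between two image sets,
    an assumed stand-in for a learned set-to-set distance."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return float(d.min())

def margin(query, sets, labels, query_label):
    """Margin of `query`: distance to the nearest set of a different
    class minus distance to the nearest set of the same class."""
    d_same = min(set_distance(query, S)
                 for S, l in zip(sets, labels) if l == query_label)
    d_diff = min(set_distance(query, S)
                 for S, l in zip(sets, labels) if l != query_label)
    return d_diff - d_same

# Toy check: a query set of class 0 lying near the class-0 set
query = np.zeros((2, 2))
sets = [np.full((2, 2), 0.1), np.full((2, 2), 10.0)]
labels = [0, 1]
m = margin(query, sets, labels, query_label=0)  # positive: correct class is nearer
```

Classification then simply assigns the test set to whichever class label yields the largest margin.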
New perceptions of continental margin biodiversity
Menot, L.; Sibuet, M.; Carney, R.S.; Levin, L.A.; Rowe, G.T.; Billett, D.S.M.; Poore, G.; Kitazato, H.; Vanreusel, A.; Galeron, J.; Lavrado, H.P.; Sellanes, J.; Ingole, B.S.; Krylova, E.
margins to major marine laboratories in developed countries. Such studies shaped our original, sometimes naïve, conceptions of what lives on these steep depth gradients even though we perceived the deep environment from afar and at a poor resolution...
Sustainable biomass production on Marginal Lands (SEEMLA)
Barbera, Federica; Baumgarten, Wibke; Pelikan, Vincent
2017-04-01
The main objective of the H2020-funded EU project SEEMLA (Sustainable Exploitation of Biomass for Bioenergy from Marginal Lands in Europe) is the establishment of suitable innovative land-use strategies for the sustainable production of plant-based energy on marginal lands while improving general ecosystem services. The use of marginal lands (MagL) could help mitigate the fast-growing competition between traditional food production and the production of renewable bio-resources on arable lands. SEEMLA focuses on promoting the re-conversion of MagLs for the production of bioenergy through the direct involvement of farmers and foresters, the strengthening of local small-scale supply chains, and the promotion of plantations of bioenergy plants on MagLs. Life cycle assessment is performed in order to analyse possible impacts on the environment. A soil quality rating tool is applied to define and classify MagL. Suitable perennial and woody bioenergy crops are selected to be grown in pilot areas in the partner countries Ukraine, Greece and Germany. SEEMLA is expected to help meet the increasing demand for biomass for bioenergy production in order to reach the 2020 targets and beyond.
Female Special Education Directors: Doubly Marginalized.
Keefe, Charlotte Hendrick; Parmley, Pamela
2003-01-01
A qualitative study of five Texas female special education directors found that although participants achieved an administrative position, they were marginalized due to their leadership style, gender discrimination, and socialization. Participants also indicated a negative connection between being in special education administration and top-level…
The marginal cost of public funds
Kleven, Henrik Jacobsen; Kreiner, Claus Thustrup
2006-01-01
This paper extends the theory and measurement of the marginal cost of public funds (MCF) to account for labor force participation responses. Our work is motivated by the emerging consensus in the empirical literature that extensive (participation) responses are more important than intensive (hours...
RISK-INFORMED SAFETY MARGIN CHARACTERIZATION
Nam Dinh; Ronaldo Szilard
2009-07-01
The concept of safety margins has served as a fundamental principle in the design and operation of commercial nuclear power plants (NPPs). Defined as the minimum distance between a system's "loading" and its "capacity", the safety margin underpins plant design and operation: an adequate margin for safety-significant parameters (e.g., fuel cladding temperature, containment pressure) must be ensured over the spectrum of anticipated plant operating, transient and accident conditions. To meet the anticipated challenges associated with extending the operational lifetimes of the current fleet of operating NPPs, the United States Department of Energy (USDOE), the Idaho National Laboratory (INL) and the Electric Power Research Institute (EPRI) have developed a collaboration to conduct coordinated research to identify and address the technological challenges and opportunities that would likely affect the safe and economic operation of the existing NPP fleet over the postulated long-term time horizons. In this paper we describe a framework for developing and implementing a Risk-Informed Safety Margin Characterization (RISMC) approach to evaluate and manage changes in plant safety margins over long time horizons.
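As a toy illustration of the load-versus-capacity definition of safety margin, the Monte Carlo sketch below estimates the probability of a negative margin. The distributions and numbers are purely hypothetical placeholders and are not taken from the RISMC framework.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical example: a normally distributed peak-cladding-temperature
# "load" against a fixed regulatory "capacity" limit (values invented).
load = rng.normal(loc=1600.0, scale=120.0, size=n)  # degrees F
capacity = 2200.0                                   # degrees F

margin = capacity - load                 # safety margin per sample
prob_exceed = np.mean(margin < 0.0)      # probability of a negative margin
print(f"mean margin: {margin.mean():.0f} F, P(load > capacity): {prob_exceed:.2e}")
```

The risk-informed view replaces a single bounding calculation with such a distributional characterization of margin.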
Marginal Strength of Collarless Metal Ceramic Crown
Sikka Swati
2010-01-01
fracture strength at the margins of metal ceramic crowns cemented to metal tooth analogs. Crowns with different marginal configurations, shoulder and shoulder bevel, with 0 mm, 0.5 mm, 1 mm, and 1.5 mm framework reduction, were evaluated. Methods. A maxillary right canine typhodont tooth was prepared to receive a metal ceramic crown with a shoulder margin. This was duplicated to obtain 20 metal tooth analogs. The same tooth was then reprepared to obtain the shoulder bevel configuration. The crowns were cemented on the metal tooth analogs and tested for fracture strength at the margin on an Instron testing machine. A progressive compressive load was applied using a 6.3 mm diameter rod with a crosshead speed of 2.5 mm per minute. Statistical analysis was performed with ANOVA, Student's "t" test and "f" test. Results. The fracture strength of the collarless metal ceramic crowns under study exceeded the normal biting force. It can therefore be suggested that collarless metal ceramic crowns with shoulder or shoulder bevel margins with up to 1.5 mm framework reduction may be indicated for anterior metal ceramic restorations. Significance. Collarless metal ceramic crowns have proved successful for anterior fixed restorations; hence, they may be subjected to more clinical trials.
Negative Stress Margins - Are They Real?
Raju, Ivatury S.; Lee, Darlene S.; Mohaghegh, Michael
2011-01-01
Advances in modeling and simulation, new finite element software, modeling engines and powerful computers are providing opportunities to interrogate designs in a very different manner and in a more detailed approach than ever before. Margins of safety for various design concepts and design parameters are also often evaluated quickly using local stresses once analysis models are defined and developed. This paper suggests that not all of the negative margins of safety evaluated are real. The structural areas where negative margins are frequently encountered are near stress concentrations, point loads and load discontinuities; near locations of stress singularities; in areas having large gradients but insufficient mesh density; in areas with modeling issues and modeling errors; at connections and interfaces; in two-dimensional (2D) to three-dimensional (3D) transitions; in bolts and bolt modeling; and at boundary conditions. Now, more than ever, structural analysts need to examine and interrogate their analysis results and perform basic sanity checks to determine whether these negative margins are real.
The oral microflora of patients with marginal periodontitis
Larsen, Tove; Fiehn, Nils-Erik
2011-01-01
Knowledge of the microbiology of marginal periodontitis began to accelerate about 40 years ago. The early knowledge was based on microscopy and culture studies of subgingival plaque. The application of newer molecular-biological methods has meant that our knowledge of the etiological factors...
Processes of marginalization in relation to participation
Lagermann, Laila Colding
2011-01-01
This paper discusses processes of marginalization in relation to the participation of two students, Amir and Saad, in the school in Copenhagen, Denmark, which they attend but also across the school and different communities outside the school. In the paper I discuss the effect of some teachers...
An information criterion for marginal structural models.
Platt, Robert W; Brookhart, M Alan; Cole, Stephen R; Westreich, Daniel; Schisterman, Enrique F
2013-04-15
Marginal structural models were developed as a semiparametric alternative to the G-computation formula to estimate causal effects of exposures. In practice, these models are often specified using parametric regression models. As such, the usual conventions regarding regression model specification apply. This paper outlines strategies for marginal structural model specification and considerations for the functional form of the exposure metric in the final structural model. We propose a quasi-likelihood information criterion adapted from use in generalized estimating equations. We evaluate the properties of our proposed information criterion using a limited simulation study. We illustrate our approach using two empirical examples. In the first example, we use data from a randomized breastfeeding promotion trial to estimate the effect of breastfeeding duration on infant weight at 1 year. In the second example, we use data from two prospective cohort studies to estimate the effect of highly active antiretroviral therapy on CD4 count in an observational cohort of HIV-infected men and women. The marginal structural model specified should reflect the scientific question being addressed but can also assist in exploration of other plausible and closely related questions. In marginal structural models, as in any regression setting, correct inference depends on correct model specification. Our proposed information criterion provides a formal method for comparing model fit for different specifications.
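The weighting idea behind marginal structural models can be illustrated in a few lines. The simulation below is a generic sketch, not the authors' analysis: a confounder influences both treatment and outcome, and stabilized inverse-probability-of-treatment weights recover the true marginal effect that the naive comparison overstates. All parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Simulated confounded data: L affects both treatment A and outcome Y;
# the true causal effect of A on Y is 2.0 by construction.
L = rng.normal(size=n)
pA = 1.0 / (1.0 + np.exp(-L))      # treatment probability depends on L
A = rng.binomial(1, pA)
Y = 2.0 * A + 3.0 * L + rng.normal(size=n)

# Stabilized inverse-probability-of-treatment weights.  Here the
# treatment model is known by construction; in practice it is estimated.
w = np.where(A == 1, A.mean() / pA, (1 - A.mean()) / (1 - pA))

# The weighted difference of means fits the marginal structural model
# E[Y^a] = b0 + b1*a, so (mu1 - mu0) estimates the causal effect b1.
mu1 = np.average(Y[A == 1], weights=w[A == 1])
mu0 = np.average(Y[A == 0], weights=w[A == 0])
print(f"naive: {Y[A==1].mean() - Y[A==0].mean():.2f}, IPW: {mu1 - mu0:.2f}")
```

The naive contrast is biased upward because treated subjects have larger L; the weighted contrast removes that confounding.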
Marginal adaptation of ceramic inserts after cementation
Ozcan, M; Pfeiffer, P; Nergiz
2002-01-01
The advantage of using ceramic inserts is to prevent major drawbacks of composite resins such as polymerization shrinkage, wear and microleakage. This in vitro study evaluated the marginal adaptation of two approximal ceramic insert systems after cementation to the cavities opened with ultrasonic ti
The marginal costs of climate changing emissions
Tol, R.S.J.; Downing, T.E.
2004-01-01
This paper presents the marginal costs of the emissions of a selected number of radiatively-active gases, three uniformly-mixed gases – carbon dioxide, methane, nitrous oxide – and two region-specific gases – nitrogen (from aircraft) and sulphur, which influence ozone and sulphate aerosol concentrat
On the concept and process of marginalization
J.B.W. Kuitenbrouwer (Joost)
1973-01-01
The concept of marginalization has its genesis in the processes of transformation which have characterized the societies of Latin America (CEPAL). It is increasingly being used to denote similar processes in other parts of the world through which groups of the population are relegated to
Early math intervention for marginalized students
Overgaard, Steffen; Tonnesen, Pia Beck
2015-01-01
This study is one of several substudies in the project Early Math Intervention for Marginalized Students (TMTM2014). The paper presents the initial process of this substudy, which will be carried out in fall 2015. In the TMTM2014 project, 80 teachers who completed a one-week course in the idea of TMTM...
Structural Marginality and the Urban Social Order.
Kapferer, Bruce
1978-01-01
This article argues for a redefinition of "Marginality" in terms of the principles that influence the developing order of the urban formation as a whole. The emerging social order and the political participation of residents of two shanty areas in Kabwe, Zambia are traced over a period of 40 years. (Author/EB)
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to search for the distribution functions of physical quantities. MENT naturally takes into account the maximum-entropy requirement, the characteristics of the system, and the constraint conditions. This makes it possible to apply MENT to the statistical description of both closed and open systems. Examples are considered in which MENT was used to describe equilibrium states, nonequilibrium states, and states far from thermodynamic equilibrium.
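As a concrete instance of the technique, the sketch below computes the discrete maximum-entropy distribution on a finite support subject to a prescribed mean. The exponential-family form p_i ∝ exp(-λ x_i) follows from the Lagrange-multiplier conditions, and λ is found by bisection; the support and target mean are arbitrary illustrative choices.

```python
import numpy as np

def maxent_with_mean(x, target_mean, tol=1e-10):
    """Maximum-entropy distribution on support `x` with a prescribed mean:
    p_i is proportional to exp(-lam * x_i), lam found by bisection."""
    def mean_for(lam):
        w = np.exp(-lam * (x - x.mean()))   # shift for numerical stability
        p = w / w.sum()
        return (p * x).sum(), p
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        m, _ = mean_for(mid)
        if m > target_mean:
            lo = mid   # larger lam shifts mass to small x, lowering the mean
        else:
            hi = mid
    return mean_for(0.5 * (lo + hi))[1]

x = np.arange(10.0)
p = maxent_with_mean(x, 3.0)   # least-biased distribution with mean 3 on {0..9}
print(p.round(4))
```

With no constraint beyond normalization (λ = 0) the result reduces to the uniform distribution, as expected.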
19 CFR 114.23 - Maximum period.
2010-04-01
19 CFR 114.23, Customs Duties (revised as of 2010-04-01), CARNETS, Processing of Carnets: § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Geomorphologic Structures on the South Cretan Margin, Greece
Nomikou, Paraskevi; Lykousis, Vasilis; Alexandri, Matina; Rousakis, Grigoris; Sakellariou, Dimitris; Lampridou, Danai; Alves, Tiago; Ballas, Dionysios
2015-04-01
the central southern margin, south of the Messara basin. It has a main central axis oriented ENE-WSW, a maximum depth of 2600 m, and is bounded by E-W fault zones. Gavdos Rise, on the other hand, occupies a major part of the South Cretan Margin. It is bordered by longitudinal troughs with steep slopes. Two intraslope basins are also distinguishable at the southwestern part of the Rise, with depths of 1100 and 2000 m, respectively. The gentler slopes of the Rise are relatively channel-free with low morphological values. The very detailed illustration of the bathymetry and morphology of the South Cretan Margin, together with the study of the canyon system, reflects the offshore active tectonics and faulting of the seafloor and the overall deformation since the Middle Miocene, in association with the general extension of the South Aegean region.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female femora, and 453.35 and 420.44 for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length greater than 476.70 were definitely male and those less than 379.99 definitely female; for left bones, femora with maximum length greater than 484.49 were definitely male and those less than 385.73 definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
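The demarking-point analysis used above can be sketched numerically. The abstract does not report standard deviations, so the SD values below are hypothetical placeholders; the customary rule classifies a femur as definitely male when its length exceeds the female mean plus three female SDs, and definitely female when it falls below the male mean minus three male SDs.

```python
# Demarking-point (D.P.) sketch for the right femur.  Means are taken
# from the abstract; the standard deviations are hypothetical, since the
# abstract does not report them.
male_mean, male_sd = 451.81, 22.0      # mm (SD assumed)
female_mean, female_sd = 417.48, 19.0  # mm (SD assumed)

dp_definitely_male = female_mean + 3 * female_sd    # above: definitely male
dp_definitely_female = male_mean - 3 * male_sd      # below: definitely female

print(f"definitely male   if length > {dp_definitely_male:.2f} mm")
print(f"definitely female if length < {dp_definitely_female:.2f} mm")
```

Only bones outside both three-sigma bands are sexed with certainty, which is why the method identifies only a small percentage of specimens.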
The pre-Caledonian margin of Baltica
Andersen, Torgeir B.; Jørgen Kjøll, Hans; Jakob, Johannes; Corfu, Fernando; Tegner, Christian
2017-04-01
It is well-documented that the pre-Caledonian margin of Baltica constituted a several-hundred-km-wide and more than 2000 km long passive margin. Its vestiges occur at low to intermediate structural levels in the mountain belt, and are variably overprinted by the early- to end-Caledonian orogenic deformation and extension. Attempts to reconstruct the Caledonian margin of Baltica must be based on detailed maps integrated with studies of the rock complexes that originally constituted the passive margin. The proximal parts of the pre-Caledonian margin of Baltica are dominated by continental rift basins with coarse- to fine-grained sediments deposited from the late Proterozoic through the Ediacaran and into the Lower Palaeozoic. The youngest dated clastic zircons probably record magmatism associated with initial contraction near or in the distal margin. The 'margin nappes' also comprise Baltican basement slivers and coarse- to fine-grained sedimentary units as well as deep-marine basin deposits. A major change in the architecture of the passive margin units takes place across a transverse zone, which is sub-parallel to the present-day Gudbrandsdalen of South Norway. The transition is roughly parallel to the major basement lineament of the Sveconorwegian orogenic front in south Norway. The most important change across this transverse lineament is that the NE segment is magma-rich, characterized by abundant basaltic magmatism. The SW segment is magma-poor, characterized by numerous (>100) solitary meta-peridotites, mostly meta-dunites and meta-harzburgites, as well as a number of detrital serpentinites and soapstones. These are interpreted as fragments of exhumed mantle and their erosion products, respectively. The meta-peridotites were emplaced structurally and are covered dominantly by deep-basin sediments, but also by coarser sedimentary breccias and conglomerates, as part of the rifted margin development. This mixed unit (mélange) was locally intruded by Late Cambrian to Early
Marginal integrity evaluation of dental composite using optical coherence tomography
Stan, Adrian-Tudor; Cojocariu, Andreea-Codruta; Antal, Anca Adriana; Topala, Florin; Sinescu, Cosmin; Negrutiu, Meda Lavinia; Duma, Virgil-Florin; Podoleanu, Adrian Gh.
2016-03-01
In clinical dental practice it is often difficult or even impossible to distinguish and control interfacial adhesive defects in adhesive restorations using visual inspection or other traditional diagnostic methods. Non-invasive biomedical imaging methods such as Optical Coherence Tomography (OCT) may provide a better view in this diagnostic setting. The aim of this study is to evaluate the marginal adaptation of class I resin composite restorations using Time Domain (TD) OCT. Posterior human teeth were chosen for this study. The teeth were stored in 0.9% physiological saline solution prior to use. Classical round-shaped class I cavities were prepared and restored with Charisma Diamond composite by Heraeus Kulzer using an etch-and-rinse bonding system. The specimens were subjected to water storage and then to thermo-cycling. Three-dimensional (3-D) scans of the restorations were obtained using a TD-OCT system centered at a 1300 nm wavelength. Open marginal adaptation at the interfaces and gaps inside the composite resin materials were identified using the proposed method. In conclusion, OCT has numerous advantages that justify its use for in vitro as well as in vivo studies. It can therefore be considered for non-invasive and fast detection of gaps at the restoration interface.
Marginal Shape Deep Learning: Applications to Pediatric Lung Field Segmentation.
Mansoor, Awais; Cerrolaza, Juan J; Perez, Geovanny; Biggs, Elijah; Nino, Gustavo; Linguraru, Marius George
2017-02-11
Representation learning through deep learning (DL) architecture has shown tremendous potential for identification, localization, and texture classification in various medical imaging modalities. However, DL applications to segmentation of objects especially to deformable objects are rather limited and mostly restricted to pixel classification. In this work, we propose marginal shape deep learning (MaShDL), a framework that extends the application of DL to deformable shape segmentation by using deep classifiers to estimate the shape parameters. MaShDL combines the strength of statistical shape models with the automated feature learning architecture of DL. Unlike the iterative shape parameters estimation approach of classical shape models that often leads to a local minima, the proposed framework is robust to local minima optimization and illumination changes. Furthermore, since the direct application of DL framework to a multi-parameter estimation problem results in a very high complexity, our framework provides an excellent run-time performance solution by independently learning shape parameter classifiers in marginal eigenspaces in the decreasing order of variation. We evaluated MaShDL for segmenting the lung field from 314 normal and abnormal pediatric chest radiographs and obtained a mean Dice similarity coefficient of 0.927 using only the four highest modes of variation (compared to 0.888 with classical ASM(1) (p-value=0.01) using same configuration). To the best of our knowledge this is the first demonstration of using DL framework for parametrized shape learning for the delineation of deformable objects.
An efficient approximation algorithm for finding a maximum clique using Hopfield network learning.
Wang, Rong Long; Tang, Zheng; Cao, Qi Ping
2003-07-01
In this article, we present a solution to the maximum clique problem using a gradient-ascent learning algorithm of the Hopfield neural network. This method provides a near-optimum parallel algorithm for finding a maximum clique. To do this, we use the Hopfield neural network to generate a near-maximum clique and then modify weights in a gradient-ascent direction to allow the network to escape from the state of near-maximum clique to maximum clique or better. The proposed parallel algorithm is tested on two types of random graphs and some benchmark graphs from the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS). The simulation results show that the proposed learning algorithm can find good solutions in reasonable computation time.
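For orientation, the snippet below is a plain greedy baseline for the maximum clique problem, not the paper's Hopfield-network gradient-ascent learning: vertices are tried in order of decreasing degree and added whenever they remain adjacent to every vertex in the current clique.

```python
import itertools

def is_clique(adj, nodes):
    """True if every pair of `nodes` is connected in adjacency dict `adj`."""
    return all(v in adj[u] for u, v in itertools.combinations(nodes, 2))

def greedy_clique(adj):
    """Greedy baseline for maximum clique (the paper's Hopfield-network
    learning is a far more elaborate search that can escape local optima)."""
    clique = []
    for v in sorted(adj, key=lambda u: -len(adj[u])):   # decreasing degree
        if all(v in adj[u] for u in clique):
            clique.append(v)
    return clique

# 5-vertex graph whose (unique) maximum clique is {0, 1, 2}.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1, 4}, 4: {3}}
print(sorted(greedy_clique(adj)))  # → [0, 1, 2]
```

Greedy search can get trapped in a non-maximum clique on harder graphs, which is exactly the failure mode the gradient-ascent weight updates in the paper are designed to escape.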
Leukocyte margination in a model microvessel
Freund, Jonathan B.
2007-02-01
The physiological inflammation response depends upon the multibody interactions of blood cells in the microcirculation that bring leukocytes (white blood cells) to the vessel walls. We investigate the fluid mechanics of this using numerical simulations of 29 red blood cells and one leukocyte flowing in a two-dimensional microvessel, with the cells modeled as linearly elastic shell membranes. Despite its obvious simplifications, this model successfully reproduces the increasingly blunted velocity profiles and increased leukocyte margination observed at lower shear rates in actual microvessels. Red cell aggregation is shown to be unnecessary for margination. The relative stiffness of the red cells in our simulations is varied by over a factor of 10, but the margination is found to be much less correlated with this than it is to changes associated with the blunting of the mean velocity profile at lower shear rates. While velocity around the leukocyte when it is near the wall depends upon the red cell properties, it changes little for strongly versus weakly marginating cases. In the more strongly marginating cases, however, a red cell is frequently observed to be leaning on the upstream side of the leukocyte and appears to stabilize it, preventing other red cells from coming between it and the wall. A well-known feature of the microcirculation is a near-wall cell-free layer. In our simulations, it is observed that the leukocyte's most probable position is at the edge of this layer. This wall stand-off distance increases with velocity following a scaling that would be expected for a lubrication mechanism, assuming that there were a nearly constant force pushing the cells toward the wall. The leukocyte's near-wall position is observed to be less stable with increasing mean stand-off distance, but this distance would have potentially greater effect on adhesion since the range of the molecular binding is so short.
Leonardo Hernández
1992-03-01
Financial Intermediation, Monetary Uncertainty, and Bank Interest Margins. This paper studies a simple model of financial intermediation in order to understand how the lending-borrowing spread, or interest margin, charged by financial intermediaries is determined in equilibrium in a monetary economy. The main conclusion of the paper concerns the effect on the spread of changes in the distribution of monetary innovations. Thus, changes in the monetary-policy rule followed by the Central Bank which alter the volatility of inflation will have important effects on the interest margin and also on the amount of credit available to investors. A cross-section empirical analysis strongly supports our hypothesis:
Integrable Hopf twists, marginal deformations and generalised geometry
Dlamini, Hector
2016-01-01
We study the symmetries of an N=1 superconformal marginal deformation of the N=4 SYM theory which depends on a real parameter w. It is a special case of the two-complex-parameter Leigh-Strassler family of superconformal deformations of N=4 SYM, which is one-loop planar-integrable. On the gauge theory side of the AdS/CFT correspondence, we construct the Hopf twist leading to the deformed global symmetry of the theory and use it to define a star product between its three scalar superfields. Turning to the gravity side of the correspondence, we adapt the above star product to deform the pure spinors of six-dimensional flat space in its generalised geometry description. This leads us to a new N=2 NS-NS solution of IIB supergravity. Starting from this precursor solution, adding D3-branes and taking the near-horizon limit leads us to an exact AdS_5x(S^5)_w solution which we conjecture to be the gravity dual of the w-deformed gauge theory. Unlike the dual to the beta-deformed Leigh-Strassler theory, the internal par...
Sequential Batch Design for Gaussian Processes Employing Marginalization †
Roland Preuss
2017-02-01
Within the Bayesian framework, we utilize Gaussian processes for parametric studies of long-running computer codes. Since the simulations are expensive, it is necessary to exploit the computational budget in the best possible manner. Employing the sum over variances, an indicator of the quality of the fit, as the utility function, we establish an optimized and automated sequential parameter selection procedure. However, it is also often desirable to utilize the parallel running capabilities of present computer technology and abandon sequential parameter selection for a faster overall turn-around (wall-clock) time. This paper proposes to achieve this by marginalizing over the expected outcomes at optimized test points in order to set up a pool of starting values for batch execution. For a one-dimensional test case, the numerical results are validated with the analytical solution. Finally, a systematic convergence study demonstrates the advantage of the optimized approach over randomly chosen parameter settings.
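The sequential variance-based selection that the batch procedure builds on can be sketched in one dimension. The kernel, length-scale, and design points below are illustrative choices, not taken from the paper: the next simulation input is simply the candidate with the largest Gaussian-process posterior variance.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def next_design_point(X, candidates, noise=1e-6):
    """Return the candidate with maximum GP posterior variance, the
    quantity a sequential variance-based design maximizes (1-D sketch;
    the paper batches points by marginalizing over expected outcomes)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(candidates, X)
    # posterior variance: k(x,x) - k_*^T K^{-1} k_*  (prior variance = 1)
    var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
    return candidates[np.argmax(var)], var

X = np.array([0.0, 0.5, 1.0])          # inputs already simulated
cand = np.linspace(0.0, 1.0, 101)
x_next, var = next_design_point(X, cand)
print(x_next)                          # lands midway between observed points
```

The selected point falls where the surrogate is most uncertain, i.e. farthest (in kernel distance) from the existing design.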
Scalable Large-Margin Mahalanobis Distance Metric Learning
Shen, Chunhua; Wang, Lei
2010-01-01
For many machine learning algorithms such as $k$-Nearest Neighbor ($k$-NN) classifiers and $k$-means clustering, success often depends heavily on the metric used to calculate distances between different data points. An effective solution for defining such a metric is to learn it from a set of labeled training samples. In this work, we propose a fast and scalable algorithm to learn a Mahalanobis distance metric. By employing the principle of margin maximization to achieve better generalization performance, this algorithm formulates metric learning as a convex optimization problem in which a positive semidefinite (psd) matrix is the unknown variable. A specialized gradient descent method is proposed. Our algorithm is much more efficient and scales better than existing methods. Experiments on benchmark data sets suggest that, compared with state-of-the-art metric learning algorithms, our algorithm can achieve a comparable classification accuracy with reduced computation...
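To make the learned object concrete, the snippet below evaluates a Mahalanobis distance for a hand-picked psd matrix M; in the paper, M is the unknown of the convex margin-maximization problem rather than chosen by hand. Down-weighting a noisy feature changes which neighbor is closer, which is exactly what a good learned metric buys a $k$-NN classifier.

```python
import numpy as np

def mahalanobis(x, y, M):
    """Distance parametrized by a psd matrix M: sqrt((x-y)^T M (x-y)).
    This is the quantity whose margins the paper's algorithm maximizes."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

# Toy data: the second feature is mostly noise.  A metric that
# down-weights it (hand-picked here, learned in the paper) reverses
# the nearest-neighbor decision relative to Euclidean distance.
a = np.array([0.0, 5.0])
b = np.array([0.2, -5.0])   # same class as a, far in the noisy feature
c = np.array([3.0, 5.1])    # different class, close in the noisy feature
M = np.diag([1.0, 0.01])    # psd by construction

print(mahalanobis(a, b, M), mahalanobis(a, c, M))  # a is now closer to b
```

Under the plain Euclidean metric the ordering is the opposite, so the choice of M directly controls classification.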
Faster Rates for training Max-Margin Markov Networks
Zhang, Xinhua; Vishwanathan, S V N
2010-01-01
Structured output prediction is an important machine learning problem both in theory and practice, and the max-margin Markov network (\\mcn) is an effective approach. All state-of-the-art algorithms for optimizing \\mcn\\ objectives take at least $O(1/\\epsilon)$ number of iterations to find an $\\epsilon$ accurate solution. Recent results in structured optimization suggest that faster rates are possible by exploiting the structure of the objective function. Towards this end \\citet{Nesterov05} proposed an excessive gap reduction technique based on Euclidean projections which converges in $O(1/\\sqrt{\\epsilon})$ iterations on strongly convex functions. Unfortunately when applied to \\mcn s, this approach does not admit graphical model factorization which, as in many existing algorithms, is crucial for keeping the cost per iteration tractable. In this paper, we present a new excessive gap reduction technique based on Bregman projections which admits graphical model factorization naturally, and converges in $O(1/\\sqrt{...
A flexible annealing chaotic neural network to maximum clique problem.
Yang, Gang; Tang, Zheng; Zhang, Zhiqiang; Zhu, Yunyi
2007-06-01
Based on the analysis and comparison of several annealing strategies, we present a flexible annealing chaotic neural network that has flexible control ability and a quick convergence rate for optimization problems. The proposed network has rich and adjustable chaotic dynamics at the beginning, and can then converge quickly to stable states. We test the network on the maximum clique problem using graphs from the DIMACS clique instances and p-random and k-random graphs. The simulations show that the flexible annealing chaotic neural network can obtain satisfactory solutions in very little time and in few steps. The comparison between our proposed network and other chaotic neural networks shows that the proposed network has superior execution efficiency and a better ability to reach optimal or near-optimal solutions.
Semiclassical decay of strings with maximum angular momentum
Iengo, R; Iengo, Roberto; Russo, Jorge G.
2003-01-01
A highly excited (closed or open) string state on the leading Regge trajectory can be represented by a rotating soliton solution. There is a semiclassical probability per unit cycle that this string can spontaneously break into two pieces. Here we find the resulting solutions for the outgoing two pieces, which describe two specific excited string states, and show that this semiclassical picture reproduces very accurately the features of the quantum calculation of decay in the large mass M limit. In particular, this picture prescribes the precise analytical relation of the masses M_1 and M_2 of the decay products, and indicates that the lifetime of these string states grows with the mass as T= const. a' M, in agreement with the quantum calculation. Thus, surprisingly, a string with maximum angular momentum becomes more stable for larger masses. We also point out some interesting features of the evolution after the splitting process.
Numerical Models of Salt Tectonics and Associated Thermal Evolution of Rifted Continental Margins
Goteti, R.; Beaumont, C.; Ings, S. J.
2011-12-01
sediment loading results in well-developed minibasins and intervening salt diapirs. The seaward flow of salt, and down-dip contraction owing to the evolving margin tilt, result in wider diapirs at the seaward end of the basin; a geometry consistent with observations in rifted margins (e.g., the south Atlantic margins). Salt flow results in the migration of the associated thermal perturbation. The expulsion of salt by sediment loading results in higher temperatures beneath the grounded minibasins. However, owing to the thermal focusing of heat by salt, the minibasins remain significantly cooler than the sediments outside the salt basin, where standard geothermal gradients are established as the margin cools. Under these circumstances hydrocarbon generation will be delayed until sediments of significant thickness are deposited in the salt minibasins. With all other parameters being equal, the down-dip flow of salt and the maximum temperature (thickness) attained in the basin are higher for narrower margins. The thermal refraction of heat owing to the salt basin observed in our models has important implications for thermal and petroleum systems modeling of rifted margin sedimentary basins.
Paulo Henrique Siqueira
2004-08-01
The purpose of this paper is to show the application of the maximum-weight Matching Algorithm to building workday schedules for bus drivers and fare collectors. The problem must be solved so as to make the best possible use of the timetables, with the goal of minimizing the number of employees, overtime hours and idle hours; in this way the costs of public transport companies are minimized. In the first phase, assuming the timetables are already divided into short- and long-duration duties, short-duration duties are combined to form an employee's daily workday. This combination is done with the maximum-weight Matching Algorithm, in which duties are represented by the vertices of a graph and maximum weight is assigned to combinations of duties that create neither overtime nor idle hours. In the second phase, a weekend workday is assigned to each weekday schedule. Through these two phases, weekly work schedules for bus drivers and fare collectors can be built at minimum cost. The third and final phase consists of assigning the weekly work schedules to each bus driver and fare collector, taking their preferences into account; the maximum-weight Matching Algorithm is used for this phase as well. This work was applied in three public transport companies in the city of Curitiba - PR, where the algorithms previously used were heuristics based solely on the experience of the person in charge of the task.
S Srikanth Reddy
2013-01-01
Background: Conventional casting technique is time-consuming compared to the accelerated casting technique. In this study, the marginal accuracy of castings fabricated using accelerated and conventional casting techniques was compared. Materials and Methods: Twenty wax patterns were fabricated, and the marginal discrepancy between the die and the patterns was measured using an optical stereomicroscope. Ten wax patterns were used for conventional casting and the rest for accelerated casting. A nickel-chromium alloy was used for the castings. The castings were measured for marginal discrepancies and compared. Results: Castings fabricated using the conventional casting technique showed less vertical marginal discrepancy than castings fabricated by the accelerated casting technique; the difference was statistically highly significant. Conclusion: The conventional casting technique produced better marginal accuracy than accelerated casting. The vertical marginal discrepancy produced by the accelerated casting technique was well within the maximum clinical tolerance limits. Clinical Implication: The accelerated casting technique can be used to save laboratory time in fabricating clinical crowns with acceptable vertical marginal discrepancy.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. A general relation has therefore been derived giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour; that functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, even more so for LSB galaxies.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCLs) can reduce short-circuit currents in electrical power systems. One of the most important tasks in developing an SFCL is to determine the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on these sample results, the total length of CC needed in the design of an SFCL can be determined.
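The sizing arithmetic implied here is direct: given a design voltage for the limiter and the measured maximum permissible volts per centimetre, the required tape length follows by division. A small sketch (the 220 V design value is a hypothetical example, not a figure from the study):

```python
def min_tape_length_cm(design_voltage_v: float, permissible_v_per_cm: float) -> float:
    """Minimum YBCO tape length so that the per-unit-length voltage during a
    limited fault stays at or below the measured maximum permissible value."""
    return design_voltage_v / permissible_v_per_cm

# Figures from the abstract (100 ms quench): SJTU CC 0.72 V/cm, 12 mm AMSC CC 0.52 V/cm.
# Hypothetical 220 V design voltage across the limiting element:
length_sjtu = min_tape_length_cm(220, 0.72)   # ≈ 305.6 cm
length_amsc = min_tape_length_cm(220, 0.52)   # ≈ 423.1 cm
```

The lower a tape's permissible V/cm, the more conductor length a given design voltage requires.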
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. A standard input consists of triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D, distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
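For readers unfamiliar with the input, a rooted triplet ab|c asserts that a and b share a more recent common ancestor than either shares with c; checking one triplet against a candidate supertree reduces to comparing LCA depths. A minimal sketch (the parent-pointer tree representation is my own choice, not the paper's):

```python
def depth(parent, v):
    """Number of edges from v up to the root (nodes absent from `parent` are roots)."""
    d = 0
    while v in parent:
        v = parent[v]
        d += 1
    return d

def lca(parent, a, b):
    """Lowest common ancestor: first ancestor of b that is also an ancestor of a."""
    anc = {a}
    v = a
    while v in parent:
        v = parent[v]
        anc.add(v)
    v = b
    while v not in anc:
        v = parent[v]
    return v

def triplet_consistent(parent, a, b, c):
    """True iff the rooted triplet ab|c holds: lca(a,b) lies strictly below lca(a,c)."""
    return depth(parent, lca(parent, a, b)) > depth(parent, lca(parent, a, c))

# Tree ((a,b),c): leaves a, b hang off internal node X; X and c hang off the root R.
tree = {"a": "X", "b": "X", "X": "R", "c": "R"}
```

Here `triplet_consistent(tree, "a", "b", "c")` is True, while ac|b and bc|a both fail.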
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
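The moment bound described above (maximum seismic moment limited to the modulus of rigidity times the injected volume) is easy to evaluate; a minimal sketch using the standard Hanks-Kanamori moment-to-magnitude conversion and an assumed typical crustal rigidity of 3×10^10 Pa:

```python
import math

def max_seismic_moment(injected_volume_m3: float, rigidity_pa: float = 3e10) -> float:
    """Upper bound on seismic moment (N·m): modulus of rigidity times injected volume."""
    return rigidity_pa * injected_volume_m3

def moment_magnitude(moment_nm: float) -> float:
    """Hanks-Kanamori moment magnitude from seismic moment in N·m."""
    return (2.0 / 3.0) * (math.log10(moment_nm) - 9.1)

# Example: 100,000 m^3 of injected wastewater.
m0 = max_seismic_moment(1e5)   # 3e15 N·m
mw = moment_magnitude(m0)      # ≈ 4.25
```

As the abstract cautions, this is an empirical bound on observed cases, not an absolute physical limit.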
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear program to compute the maximum throughput and show its superiority over that achievable in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard, and a polynomial approximation algorithm is proposed.
Adaptive Marginal Median Filter for Colour Images
Almanzor Sapena
2011-03-01
This paper describes a new filter for impulse noise reduction in colour images which is aimed at improving the noise reduction capability of the classical vector median filter. The filter is inspired by the application of a vector marginal median filtering process over a selected group of pixels in each filtering window. This selection, which is based on the vector median, along with the application of the marginal median operation, constitutes an adaptive process that leads to a more robust filter design. Also, the proposed method is able to process colour images without introducing colour artifacts. Experimental results show that the images filtered with the proposed method contain fewer noisy pixels than those obtained through the vector median filter.
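The marginal (component-wise) median referred to above takes the median of each colour channel independently over the filter window, so the output pixel need not coincide with any input pixel (unlike the vector median). A minimal sketch for one 3×3 window; the paper's adaptive pixel-selection step is omitted:

```python
from statistics import median

def marginal_median(window):
    """Component-wise median of a list of RGB pixels: each colour channel is
    filtered independently, so the result may be a pixel not present in the window."""
    return tuple(median(p[ch] for p in window) for ch in range(3))

# A 3x3 window with three impulse-corrupted pixels among smooth neighbours.
pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255),
          (10, 10, 10), (20, 20, 20), (30, 30, 30),
          (200, 5, 5), (5, 200, 5), (12, 14, 16)]
out = marginal_median(pixels)   # the impulses are suppressed channel by channel
```

Note how the channel-wise medians discard the saturated impulse values even though no single input pixel equals the output.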
Distributions with given marginals and statistical modelling
Fortiana, Josep; Rodriguez-Lallena, José
2002-01-01
This book contains a selection of the papers presented at the meeting `Distributions with given marginals and statistical modelling', held in Barcelona (Spain), July 17-20, 2000. In 24 chapters, this book covers topics such as the theory of copulas and quasi-copulas, the theory and compatibility of distributions, models for survival distributions and other well-known distributions, time series, categorical models, definition and estimation of measures of dependence, monotonicity and stochastic ordering, shape and separability of distributions, hidden truncation models, diagonal families, orthogonal expansions, tests of independence, and goodness of fit assessment. These topics share the use and properties of distributions with given marginals, this being the fourth specialised text on this theme. The innovative aspect of the book is the inclusion of statistical aspects such as modelling, Bayesian statistics, estimation, and tests.
Time Domain Stability Margin Assessment Method
Clements, Keith
2017-01-01
The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation.
Rigidity of marginally outer trapped 2-spheres
Galloway, Gregory J
2015-01-01
In a matter-filled spacetime, perhaps with positive cosmological constant, a stable marginally outer trapped 2-sphere must satisfy a certain area inequality. Namely, as discussed in the paper, its area must be bounded above by $4\pi/c$, where $c > 0$ is a lower bound on a natural energy-momentum term. We then consider the rigidity that results for stable, or weakly outermost, marginally outer trapped 2-spheres that achieve this upper bound on the area. In particular, we prove a splitting result for 3-dimensional initial data sets analogous to a result of Bray, Brendle and Neves [10] concerning area minimizing 2-spheres in Riemannian 3-manifolds with positive scalar curvature. We further show that these initial data sets locally embed as spacelike hypersurfaces into the Nariai spacetime. Connections to the Vaidya spacetime and dynamical horizons are also discussed.
Passive target tracking using marginalized particle filter
[Author not listed]
2007-01-01
A marginalized particle filtering (MPF) approach is proposed for target tracking with passive measurements. Essentially, the MPF is a combination of the particle filtering technique and the Kalman filter. By making full use of marginalization, the distributions of the tractable linear part of the total state variables are updated analytically using the Kalman filter, and only the lower-dimensional nonlinear state variable needs to be dealt with using the particle filter. Simulation studies are performed on an illustrative example, and the results show that the MPF method leads to a significant reduction of the tracking errors when compared with the direct particle implementation. Real data test results also validate the effectiveness of the presented method.
Marginal Loss Calculations for the DCOPF
Eldridge, Brent [Federal Energy Regulatory Commission, Washington, DC (United States); Johns Hopkins Univ., Baltimore, MD (United States); O' Neill, Richard P. [Federal Energy Regulatory Commission, Washington, DC (United States); Castillo, Andrea R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-12-05
The purpose of this paper is to explain some aspects of including a marginal line loss approximation in the DCOPF. The DCOPF optimizes electric generator dispatch using simplified power flow physics. Since the standard assumptions in the DCOPF include a lossless network, a number of modifications have to be added to the model. Calculating marginal losses allows the DCOPF to optimize the location of power generation, so that generators that are closer to demand centers are relatively cheaper than remote generation. The problem formulations discussed in this paper will simplify many aspects of practical electric dispatch implementations in use today, but will include sufficient detail to demonstrate a few points with regard to the handling of losses.
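As a toy illustration of why marginal losses relocate generation value (this is not the paper's formulation, and sign conventions differ between markets): with losses priced, delivering 1 MW from a bus whose withdrawals increase system losses requires more than 1 MW at the margin, so its price rises relative to a bus that relieves losses. The numbers below are hypothetical:

```python
def lmp_with_losses(system_lambda: float, marginal_loss_factor: float) -> float:
    """Loss-adjusted locational price: the system energy price scaled by the
    marginal loss sensitivity at this bus. A positive factor means withdrawals
    there increase total losses; a negative factor means they relieve losses."""
    return system_lambda * (1.0 + marginal_loss_factor)

# Hypothetical $30/MWh system lambda. A load pocket (+0.05) prices above
# a bus near remote generation that relieves losses (-0.04).
price_pocket = lmp_with_losses(30.0, 0.05)    # 31.5 $/MWh
price_remote = lmp_with_losses(30.0, -0.04)   # 28.8 $/MWh
```

This spread is exactly what lets the DCOPF prefer generation electrically close to demand once a marginal loss approximation is added.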
European refiners re-adjust margins strategy
Gonzalez, R.G. [ed.
1996-05-01
Refiners in Europe are adjusting operating strategies to reflect the volatility of tight operating margins. From the unexpected availability of quality crudes (e.g., Brent, 0.3% sulfur) to the role of government in refinery planning, the European refining industry is positioning itself to reverse the past few years of steadily declining profitability. Unlike the expected increases in US gasoline demand, European gasoline consumption is not expected to increase, and heavy fuel oil consumption is also declining. However, diesel fuel consumption is expected to increase, even though diesel processing capacity has recently decreased (i.e., more imports). Possible strategies that European refiners may adopt to improve margins and reduce volatility include: increasing conversion capacity to supply growing demand for middle distillates and LPG; alleviating refinery cash-flow problems with alliances; and directing discretionary investment toward retail merchandising (unless there is a clear trend toward a widening of the sweet-sour crude price differential).
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of the set. An explicit transformation of the maximum confidence measurement is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced…
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used … algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find…
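The First-Fit family named above is simple to state: sort the items (increasing or decreasing), then place each item into the first open bin with enough room, opening a new bin when none fits. A minimal sketch for unit-capacity bins; the paper's competitive analysis is not reproduced here:

```python
def first_fit(sizes, capacity=1.0):
    """Place each item into the first bin with room, opening a new bin if none fits."""
    bins = []
    for s in sizes:
        for b in bins:
            if sum(b) + s <= capacity + 1e-12:   # tolerance for float sums
                b.append(s)
                break
        else:
            bins.append([s])
    return bins

def first_fit_increasing(sizes, capacity=1.0):
    return first_fit(sorted(sizes), capacity)

def first_fit_decreasing(sizes, capacity=1.0):
    return first_fit(sorted(sizes, reverse=True), capacity)

items = [0.4, 0.7, 0.2, 0.5, 0.3]
ffi_bins = first_fit_increasing(items)   # e.g. [[0.2, 0.3, 0.4], [0.5], [0.7]]
```

Under the maximum resource objective, an algorithm is better the more bins it opens, which inverts the usual quality ordering of these heuristics.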
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the maximum-likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified maximum-entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as to data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions…
Marginal longitudinal semiparametric regression via penalized splines
Al Kadiri, M.; Carroll, R. J.; Wand, M. P.
2010-08-01
We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been proposed, a relative simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.
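A penalized-spline fit of the kind described can be sketched in a few lines: a truncated-line basis with a ridge penalty on the knot coefficients, solved as a small linear system. The basis, knots, and penalty weight below are illustrative choices, and the Gibbs/BUGS machinery of the paper is not shown:

```python
def truncated_line_basis(x, knots):
    """Design row: intercept, slope, and one truncated line (x - k)_+ per knot."""
    return [1.0, x] + [max(x - k, 0.0) for k in knots]

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    beta = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = m[i][n] - sum(m[i][j] * beta[j] for j in range(i + 1, n))
        beta[i] = s / m[i][i]
    return beta

def penalized_spline_fit(xs, ys, knots, lam=1.0):
    """Minimize ||y - B beta||^2 + lam * ||beta_knots||^2 (ridge on knot terms only)."""
    rows = [truncated_line_basis(x, knots) for x in xs]
    p = 2 + len(knots)
    btb = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    bty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(p)]
    for i in range(2, p):            # penalize only the knot coefficients
        btb[i][i] += lam
    beta = solve(btb, bty)
    return lambda x: sum(c * v for c, v in zip(beta, truncated_line_basis(x, knots)))
```

Because a straight line incurs no penalty (its knot coefficients are zero), exactly linear data are recovered exactly, which is a handy sanity check on any penalized-spline implementation.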
Neotectonics in the northern equatorial Brazilian margin
Rossetti, Dilce F.; Souza, Lena S. B.; Prado, Renato; Elis, Vagner R.
2012-08-01
An increasing volume of publications has addressed the role of tectonics in inland areas of northern Brazil during the Neogene and Quaternary, despite its location on a passive margin. Hence, the northern South America plate in this time interval might not have been as passive as usually regarded. This proposal needs further support, particularly from field data. In this work, we applied an integrated approach to reveal tectonic structures in Miocene and late Quaternary strata in a coastal area of the Amazonas lowland. The investigation, undertaken on Marajó Island, at the mouth of the Amazonas River, consisted of shallow sub-surface geophysical data, including vertical electric sounding and ground-penetrating radar. These methods were combined with morphostructural analysis and sedimentological/stratigraphic data from shallow cores and a few outcrops. The results revealed two stratigraphic units, a lower one of Miocene age and an upper one of Late Pleistocene-Holocene age. An abundance of faults and folds was recorded in the Miocene deposits and, to a lesser extent, in the overlying Late Pleistocene-Holocene strata. In addition to characterizing these structures, we discuss their origin, considering three potential mechanisms: Andean tectonics, gravity tectonics related to sediment loading in the Amazon Fan, and rifting at the continental margin. Amongst these hypotheses, the most likely is that the faults and folds recorded on Marajó Island reflect tectonics associated with the history of continental rifting that gave rise to the South Atlantic Ocean. This study supports sediment deposition influenced by transpression and transtension associated with strike-slip divergence along the northern Equatorial Brazilian margin in the Miocene and Late Pleistocene-Holocene. This work records tectonic evidence only for the uppermost few tens of meters of this sedimentary succession. However, available geological data indicate a thickness of up to 6 km, which is remarkably thick for…
Negative marginal tax rates and heterogeneity
Choné, Philippe; Laroque, Guy
2009-01-01
Heterogeneity is likely to be an important determinant of the shape of optimal tax schemes. This article addresses the issue in a model à la Mirrlees with a continuum of agents. The agents differ in their productivities and opportunity costs of work, but their labor supplies depend only on a unidimensional combination of their two characteristics. Conditions are given under which the standard result that marginal tax rates are everywhere non-negative holds. This is in particular the case when...
Mining their own Business in the Margins
Jensen, Lars
Mining has long been established in Australian public discourse as an activity that has driven the Australian economy and guaranteed Australia against the economic ills of the rest of the West. Or, put slightly differently, the positive spin on mining in public discourse and the financial market … of speaking about margins/marginalisation in relation to the mining industry, that is, as something conducted beyond the horizon, something which defines the horizon, and as a process through which remoteness defines the (national) self.
Authigenic minerals from the continental margins
Rao, V.P.
…phosphorites have been presumed to be sedimented plankton organic matter, fish debris, and an iron-redox phosphate pump. Several workers investigated the genesis of sedimentary phosphorites occurring from the Precambrian to the Recent and proposed different … phosphate sediments of the western continental margin of India showed that phosphate occurred as apatite microparticles that resembled fossilized phosphate bacteria and/or microbial filaments (Fig. 3). This established the prominence of micro-environments…
PENDIDIKAN ALTERNATIF UNTUK PEREMPUAN MARGINAL DI PEDESAAN
Ratnawati Tahir
2011-11-01
Abstract: Alternative Education for Marginalized Women in Rural Areas. The study aims to identify forms of alternative education for marginalized women, the process of forming study groups, and a gender-based learning process that serves as a center for educational development, leadership, and economic empowerment. The study uses qualitative methods, involving one group of women who had attended an alternative education program; informants included community leaders such as the village head, neighborhood leaders, and housewives. The results show that the form of alternative education is the adult-education (andragogy) method. Study groups consisted of basic literacy and functional literacy. The learning process begins with shared learning, reflection on life experience, and the role-play method. As a result, 65% of the participants improved their reading, writing and numeracy skills, as well as their understanding of women's issues, and gained confidence in household and community decision-making.
Statistical Mechanics of Soft Margin Classifiers
Risau-Gusman, Sebastian; Gordon, Mirta B.
2001-01-01
We study the typical learning properties of the recently introduced Soft Margin Classifiers (SMCs), learning realizable and unrealizable tasks, with the tools of Statistical Mechanics. We derive analytically the behaviour of the learning curves in the regime of very large training sets. We obtain exponential and power laws for the decay of the generalization error towards the asymptotic value, depending on the task and on general characteristics of the distribution of stabilities of the patterns…
Marginal Ice Zone: Biogeochemical Sampling with Gliders
2015-09-30
…ocean and base of marine food webs, are responding to changing conditions in the Arctic Ocean. The high-level project goals are to use underwater … observing array (Fig. 1). The glider sensor suite included temperature, temperature microstructure, salinity, oxygen, chlorophyll fluorescence, optical … presented at the Arctic Science Summit Week in Toyama, Japan, in April 2015 (Direct Observations of Ocean Variability Across the Arctic Marginal Ice Zone).
Marginal microfiltration in amalgam restorations. Review
Lahoud Salem, Víctor; Departamento Académico de Estomatología Rehabilitadora. Facultad de Odontología, Universidad Nacional Mayor de San Marcos, Lima. Perú.
2014-01-01
This article reviews the literature on the phenomenon of microleakage in restorations with amalgam and its consequences: colour change at the tooth-restoration interface, marginal deterioration, postoperative dentinal sensitivity, secondary caries, and pulp inflammation. It also describes the mechanisms available to reduce microleakage and the effects of using dentinal sealers, represented by cavity varnishes and adhesive systems. The conclusions indicate that amalgam is the ma…
Marginal selenium status in northern Tasmania.
Beckett, Jeffrey M; Ball, Madeleine J
2011-09-01
Se plays many important roles in humans. Marginal Se status has been associated with adverse health effects including an increased risk of chronic disease such as cancer. There are few Australian data, but the population of Tasmania, Australia, is potentially at risk of marginal Se status. A cross-sectional study of 498 men and women aged 25-84 years was undertaken to assess the Se status of the northern Tasmanian population. Se status was assessed using dietary estimates and measures of serum Se and glutathione peroxidase (GPx). Mean Se intakes were 77·4 (sd 31·3) and 65·1 (sd 23·7) μg/d for men and women, respectively; 27 % of the subjects consumed less than the Australian/New Zealand estimated average requirement. Mean serum Se concentration was 89·1 (sd 15·1) μg/l; 83 % of the study subjects had serum Se concentrations below 100 μg/l and 60 % had serum Se concentration below 90 μg/l, suggesting that Se status in many subjects was inadequate for maximal GPx activity. This was supported by the positive association between serum Se and serum GPx (P < 0·001), indicating that enzyme activity was limited by Se concentrations. The lowest mean serum Se concentrations were observed in the oldest age ranges; however, the prevalence of marginal Se status was similar across age ranges and did not appear to be influenced by sex or socio-economic status. The prevalence of marginal Se status was high in all sex and age subgroups, suggesting that the northern Tasmanian population could benefit from increasing Se intakes.
Marginal longitudinal semiparametric regression via penalized splines.
Kadiri, M Al; Carroll, R J; Wand, M P
2010-08-01
We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been proposed, a relative simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.
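The penalized-spline approach the abstract refers to can be illustrated outside BUGS as well. A minimal sketch, assuming toy data and a fixed smoothing parameter (the paper's Bayesian treatment would instead place a prior on it and sample via Gibbs), using the standard truncated-line basis with a ridge penalty on the knot coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy data standing in for one smooth mean function (illustrative only).
x = np.sort(rng.random(150))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, x.size)

# Truncated-line basis [1, x, (x - k)_+] with a ridge penalty on the knot
# coefficients only -- the low-rank representation penalized splines use.
knots = np.linspace(0.05, 0.95, 20)
Z = np.maximum(x[:, None] - knots[None, :], 0.0)
X = np.column_stack([np.ones_like(x), x, Z])

lam = 1.0                       # smoothing parameter, fixed here; the Bayesian
D = np.eye(X.shape[1])          # route would place a prior on it instead
D[0, 0] = D[1, 1] = 0.0         # leave intercept and slope unpenalized
beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
fitted = X @ beta               # smooth fit to the noisy curve
```

The mixed-model form of this penalty is what makes the Gibbs/BUGS implementation in the paper straightforward.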
Influence of different restorative techniques on marginal seal of class II composite restorations
Sinval Adalberto Rodrigues Junior
2010-02-01
Full Text Available OBJECTIVE: To evaluate the gingival marginal seal in class II composite restorations using different restorative techniques. MATERIAL AND METHODS: Class II box cavities were prepared in both proximal faces of 32 sound human third molars with gingival margins located in either enamel or dentin/cementum. Restorations were performed as follows: G1 (control): composite, conventional light-curing technique; G2: composite, soft-start technique; G3: amalgam/composite association (amalcomp); and G4: resin-modified glass ionomer cement/composite, open sandwich technique. The restored specimens were thermocycled. Epoxy resin replicas were made and coated for scanning electron microscopy examination. For microleakage evaluation, teeth were coated with nail polish and immersed in dye solution. Teeth were cut in 3 slices and dye penetration was recorded (mm), digitized and analyzed with Image Tool software. Microleakage data were analyzed statistically by non-parametric Kruskal-Wallis and Mann-Whitney tests. RESULTS: Leakage in enamel was lower than in dentin (p<0.001). G2 exhibited the lowest leakage values (p<0.05) in enamel margins, with no differences between the other groups. In dentin margins, groups G1 and G2 had similar behavior and both showed less leakage (p<0.05) than groups G3 and G4. SEM micrographs revealed different marginal adaptation patterns for the different techniques and for the different substrates. CONCLUSION: The soft-start technique showed no leakage in enamel margins and produced similar values to those of the conventional (control) technique for dentin margins.
Lukenbach, M. C.; Hokanson, K. J.; Devito, K. J.; Kettridge, N.; Petrone, R. M.; Mendoza, C. A.; Granath, G.; Waddington, J. M.
2017-05-01
In the Boreal Plain of Canada, the margins of peatland ecosystems that regulate solute and nutrient fluxes between peatlands and adjacent mineral uplands are prone to deep peat burning. Whether post-fire carbon accumulation is able to offset large carbon losses associated with the deep burning at peatland margins is unknown. For this reason, we examined how post-fire hydrological conditions (i.e. water table depth and periodicity, soil tension, and surface moisture content) and depth of burn were associated with moss recolonization at the peatland margins of three sites. We then interpreted these findings using a hydrogeological systems approach, given the importance of groundwater in determining conditions in the soil-plant-atmosphere continuum in peatlands. Peatland margins dominated by local groundwater flow from adjacent peatland middles were characterized by dynamic hydrological conditions that, when coupled with lowered peatland margin surface elevations due to deep burning, produced two common hydrological states: 1) flooding during wet periods and 2) rapid water table declines during dry periods. These dynamic hydrological states were unfavorable to peatland moss recolonization and bryophytes typical of post-fire recovery in mineral uplands became established. In contrast, at a peatland margin where post-fire hydrological conditions were moderated by larger-scale groundwater flow, flooding and rapid water table declines were infrequent and, subsequently, greater peatland-dwelling moss recolonization was observed. We argue that peatland margins poorly connected to larger-scale groundwater flow are not only prone to deep burning but also lags in post-fire moss recovery. Consequently, an associated reduction in post-fire peat accumulation may occur and negatively affect the net carbon sink status and ecohydrological and biogeochemical function of these peatlands.
Target margins in radiotherapy of prostate cancer.
Yartsev, Slav; Bauman, Glenn
2016-11-01
We reviewed the literature on the use of margins in radiotherapy of patients with prostate cancer, focusing on different options for image guidance (IG) and technical issues. The search in PubMed database was limited to include studies that involved external beam radiotherapy of the intact prostate. Post-prostatectomy studies, brachytherapy and particle therapy were excluded. Each article was characterized according to the IG strategy used: positioning on external marks using room lasers, bone anatomy and soft tissue match, usage of fiducial markers, electromagnetic tracking and adapted delivery. A lack of uniformity in margin selection among institutions was evident from the review. In general, introduction of pre- and in-treatment IG was associated with smaller planning target volume (PTV) margins, but there was a lack of definitive experimental/clinical studies providing robust information on selection of exact PTV values. In addition, there is a lack of comparative research regarding the cost-benefit ratio of the different strategies: insertion of fiducial markers or electromagnetic transponders facilitates prostate gland localization but at a price of invasive procedure; frequent pre-treatment imaging increases patient in-room time, dose and labour; online plan adaptation should improve radiation delivery accuracy but requires fast and precise computation. Finally, optimal protocols for quality assurance procedures need to be established.
Ocean Margins Programs, Phase I research summaries
Verity, P. [ed.]
1994-08-01
During FY 1992, the DOE restructured its regional coastal-ocean programs into a new Ocean Margins Program (OMP), to: Quantify the ecological and biogeochemical processes and mechanisms that affect the cycling, flux, and storage of carbon and other biogenic elements at the land/ocean interface; Define ocean-margin sources and sinks in global biogeochemical cycles, and; Determine whether continental shelves are quantitatively significant in removing carbon dioxide from the atmosphere and isolating it via burial in sediments or export to the interior ocean. Currently, the DOE Ocean Margins Program supports more than 70 principal and co-principal investigators, spanning more than 30 academic institutions. Research funded by the OMP amounted to about $6.9M in FY 1994. This document is a collection of abstracts summarizing the component projects of Phase I of the OMP. This phase included both research and technology development, and comprised projects of both two and three years duration. The attached abstracts describe the goals, methods, measurement scales, strengths and limitations, and status of each project, and level of support. Keywords are provided to index the various projects. The names, addresses, affiliations, and major areas of expertise of the investigators are provided in appendices.
Origin and dynamics of depositionary subduction margins
Vannucchi, Paola; Morgan, Jason P.; Silver, Eli; Kluesner, Jared
2016-01-01
Here we propose a new framework for forearc evolution that focuses on the potential feedbacks between subduction tectonics, sedimentation, and geomorphology that take place during an extreme event of subduction erosion. These feedbacks can lead to the creation of a “depositionary forearc,” a forearc structure that extends the traditional division of forearcs into accretionary or erosive subduction margins by demonstrating a mode of rapid basin accretion during an erosive event at a subduction margin. A depositionary mode of forearc evolution occurs when terrigenous sediments are deposited directly on the forearc while it is being removed from below by subduction erosion. In the most extreme case, an entire forearc can be removed by a single subduction erosion event followed by depositionary replacement without involving transfer of sediments from the incoming plate. We need to further recognize that subduction forearcs are often shaped by interactions between slow, long-term processes, and sudden extreme events reflecting the sudden influences of large-scale morphological variations in the incoming plate. Both types of processes contribute to the large-scale architecture of the forearc, with extreme events associated with a replacive depositionary mode that rapidly creates sections of a typical forearc margin. The persistent upward diversion of the megathrust is likely to affect its geometry, frictional nature, and hydrogeology. Therefore, the stresses along the fault and individual earthquake rupture characteristics are also expected to be more variable in these erosive systems than in systems with long-lived megathrust surfaces.
Marginal historiography: on Stekel's account of things.
Bos, Jaap
2005-01-01
Psychoanalytic historiography has been, and to a certain extent still is, written mainly from the victor's (Freud's) perspective. One of the first attempts to deliver an alternative account was published in 1926 by Wilhelm Stekel in a little-known paper entitled "On the History of the Analytical Movement," which he wrote in response to Freud's (1925) "An Autobiographical Study" as an attempt to supplement or even counter Freud's version. This paper offers a dialogical reading of Stekel's paper, focusing not on the question of whether or not Stekel was right, but on the problem of marginalization itself. What discursive processes contributed to the marginalization of Stekel's position, and in what sense could Stekel's paper be called an instance of self-marginalization? Analysing various intertextual links between Freud's and Stekel's accounts, the author finds that the two accounts were caught up in an antagonistic dialectic from which it was impossible to escape. Following this paper, an English translation of Stekel's 1926 account is presented here for the first time.
Testing for conditional multiple marginal independence.
Bilder, Christopher R; Loughin, Thomas M
2002-03-01
Survey respondents are often prompted to pick any number of responses from a set of possible responses. Categorical variables that summarize this kind of data are called pick any/c variables. Counts from surveys that contain a pick any/c variable along with a group variable (r levels) and stratification variable (q levels) can be marginally summarized into an r x c x q contingency table. A question that may naturally arise from this setup is to determine if the group and pick any/c variable are marginally independent given the stratification variable. A test for conditional multiple marginal independence (CMMI) can be used to answer this question. Since subjects may pick any number out of c possible responses, the Cochran (1954, Biometrics 10, 417-451) and Mantel and Haenszel (1959, Journal of the National Cancer Institute 22, 719-748) tests cannot be used directly because they assume that units in the contingency table are independent of each other. Therefore, new testing methods are developed. Cochran's test statistic is extended to r x 2 x q tables, and a modified version of this statistic is proposed to test CMMI. Its sampling distribution can be approximated through bootstrapping. Other CMMI testing methods discussed are bootstrap p-value combination methods and Bonferroni adjustments. Simulation findings suggest that the proposed bootstrap procedures and the Bonferroni adjustments consistently hold the correct size and provide power against various alternatives.
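The resampling logic behind such tests can be sketched in a few lines. This is illustrative only: it uses a permutation of group labels in place of the authors' bootstrap, and a generic sum of per-item chi-square statistics rather than their extended Cochran statistic, but it shows why whole respondents must be resampled, preserving the within-respondent dependence among the c items that invalidates the classical tests:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical pick-any/3 survey: each of n respondents may pick any of 3
# items; `group` is a binary group label. Data are generated under H0 here.
n = 200
group = rng.integers(0, 2, n)
picks = rng.random((n, 3)) < 0.4

def stat(g, picks):
    # Sum over items of the chi-square statistic of each 2x2 marginal table.
    s = 0.0
    for j in range(picks.shape[1]):
        tab = np.array([[np.sum((g == a) & (picks[:, j] == b))
                         for b in (False, True)] for a in (0, 1)], float)
        exp = tab.sum(1, keepdims=True) * tab.sum(0, keepdims=True) / tab.sum()
        s += ((tab - exp) ** 2 / exp).sum()
    return s

obs = stat(group, picks)
# Shuffle group labels over whole respondents: each respondent's pick vector
# stays intact, so the correlation among items is preserved under the null.
null = [stat(rng.permutation(group), picks) for _ in range(999)]
p_value = (1 + sum(b >= obs for b in null)) / 1000
```

Since the data were generated under independence, the p-value should be non-significant most of the time under repeated simulation.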
Joint modelling of annual maximum drought severity and corresponding duration
Tosunoglu, Fatih; Kisi, Ozgur
2016-12-01
In recent years, the joint distribution properties of drought characteristics (e.g. severity, duration and intensity) have been widely evaluated using copulas. However, history of copulas in modelling drought characteristics obtained from streamflow data is still short, especially in semi-arid regions, such as Turkey. In this study, unlike previous studies, drought events are characterized by annual maximum severity (AMS) and corresponding duration (CD) which are extracted from daily streamflow of the seven gauge stations located in Çoruh Basin, Turkey. On evaluation of the various univariate distributions, the Exponential, Weibull and Logistic distributions are identified as marginal distributions for the AMS and CD series. Archimedean copulas, namely Ali-Mikhail-Haq, Clayton, Frank and Gumbel-Hougaard, are then employed to model joint distribution of the AMS and CD series. With respect to the Anderson Darling and Cramér-von Mises statistical tests and the tail dependence assessment, Gumbel-Hougaard copula is identified as the most suitable model for joint modelling of the AMS and CD series at each station. Furthermore, the developed Gumbel-Hougaard copulas are used to derive the conditional and joint return periods of the AMS and CD series which can be useful for designing and management of reservoirs in the basin.
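For intuition, the joint return period computation that such a fitted copula enables can be sketched as follows. The Gumbel-Hougaard form is standard, but the marginal non-exceedance probabilities and the dependence parameter theta below are made-up illustrative values, not estimates from the Çoruh stations:

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula CDF C(u, v; theta), valid for theta >= 1."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

# Assumed marginal non-exceedance probabilities for AMS and CD, and an
# assumed dependence parameter (illustrative values only).
u, v, theta = 0.9, 0.9, 2.0

# "AND" joint return period: both severity and duration exceeded in the
# same year (annual maxima, so the mean interarrival time is 1 year).
p_and = 1.0 - u - v + gumbel_copula(u, v, theta)
T_and = 1.0 / p_and
```

With independent margins the same event would have probability 0.01 (T = 100 years); the positive dependence captured by theta = 2 makes the joint exceedance far more frequent.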
tmle : An R Package for Targeted Maximum Likelihood Estimation
Susan Gruber
2012-11-01
Full Text Available Targeted maximum likelihood estimation (TMLE is a general approach for constructing an efficient double-robust semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user supplied covariates, including an additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the out- come. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.
Stanton-Yonge, A.; Griffith, W. A.; Cembrano, J.; St. Julien, R.; Iturrieta, P.
2016-09-01
Obliquely convergent subduction margins develop trench-parallel faults shaping the regional architecture of orogenic belts and partitioning intraplate deformation. However, transverse faults are also common along most orogenic belts and have been largely neglected in slip partitioning analysis. Here we constrain the sense of slip and slip rates of differently oriented faults to assess whether and how transverse faults accommodate plate-margin slip arising from oblique subduction. We implement a forward 3-D boundary element method model of subduction at the Chilean margin evaluating the elastic response of intra-arc faults during different stages of the Andean subduction seismic cycle (SSC). Our model results show that the margin-parallel, NNE striking Liquiñe-Ofqui Fault System accommodates dextral-reverse slip during the interseismic period of the SSC, with oblique slip rates ranging between 1 and 7 mm/yr. NW striking faults exhibit sinistral-reverse slip during the interseismic phase of the SSC, displaying a maximum oblique slip of 1.4 mm/yr. ENE striking faults display dextral strike slip, with a slip rate of 0.85 mm/yr. During the SSC coseismic phase, all modeled faults switch their kinematics: NE striking faults become sinistral, whereas NW striking faults are normal dextral. Because coseismic tensile stress changes on NW faults reach 0.6 MPa at 10-15 km depth, it is likely that they can serve as transient magma pathways during this phase of the SSC. Our model challenges the existing paradigm wherein only margin-parallel faults account for slip partitioning: transverse faults are also capable of accommodating a significant amount of plate-boundary slip arising from oblique convergence.
Suitability of marginal biomass-derived biochars for soil amendment.
Buss, Wolfram; Graham, Margaret C; Shepherd, Jessica G; Mašek, Ondřej
2016-03-15
The term "marginal biomass" is used here to describe materials of little or no economic value, e.g. plants grown on contaminated land, food waste or demolition wood. In this study 10 marginal biomass-derived feedstocks were converted into 19 biochars at different highest treatment temperatures (HTT) using a continuous screw-pyrolysis unit. The aim was to investigate suitability of the resulting biochars for land application, judged on the basis of potentially toxic element (PTE) concentration, nutrient content and basic biochar properties (pH, EC, ash, fixed carbon). It was shown that under typical biochar production conditions the percentage content of several PTEs (As, Al, Zn) and nutrients (Ca, Mg) were reduced to some extent, but also that biochar can be contaminated by Cr and Ni during the pyrolysis process due to erosion of stainless steel reactor parts (average+82.8% Cr, +226.0% Ni). This can occur to such an extent that the resulting biochar is rendered unsuitable for soil application (maximum addition +22.5 mg Cr kg(-1) biochar and +44.4 mg Ni kg(-1) biochar). Biomass grown on land heavily contaminated with PTEs yielded biochars with PTE concentrations above recommended threshold values for soil amendments. Cd and Zn were of particular concern, exceeding the lowest threshold values by 31-fold and 7-fold respectively, despite some losses into the gas phase. However, thermal conversion of plants from less severely contaminated soils, demolition wood and food waste anaerobic digestate (AD) into biochar proved to be promising for land application. In particular, food waste AD biochar contained very high nutrient concentrations, making it interesting for use as fertiliser.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper, buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. Detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. Detailed discussion on circuit-oriented model development is given and then MPPT effectiveness of various converter systems is verified through simulations. The proposed theory and analysis are validated through experimental investigations.
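The non-linear i-v characteristic and its single power maximum can be made concrete with a minimal numerical sketch. This uses a simplified exponential diode model with invented parameters, not the paper's converter analysis; a real MPPT controller would track this point online rather than scan for it:

```python
import math

# Hypothetical PV module parameters (illustrative values, not from the paper).
I_SC = 5.0    # short-circuit current [A]
I_0 = 1e-9    # diode saturation current [A]
V_T = 0.7     # lumped exponential voltage scale [V]

def current(v):
    # Simplified single-diode PV characteristic: the non-linear i-v curve.
    return I_SC - I_0 * (math.exp(v / V_T) - 1.0)

# Locate the maximum power point by a coarse scan over operating voltage.
best_v = max((v / 1000.0 for v in range(0, 20000)),
             key=lambda v: v * current(v))
p_mpp = best_v * current(best_v)
```

The scan makes visible what the paper proves analytically: power rises almost linearly with voltage, then collapses sharply near open circuit, so operating points outside a narrow band sacrifice most of the available power.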
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
Full Text Available This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided, that depend on the size, the order or the number of faces of G, respectively. Polyhedral graphs are constructed, that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
textabstractIn a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the electron Yukawa coupling, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which the Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
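The closing formula invites a quick order-of-magnitude check. A sketch with standard values, assuming T_BBN ~ 1 MeV for "where Big Bang nucleosynthesis starts" (the abstract does not fix this number):

```python
# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5).
m_e = 0.000511                   # electron mass [GeV]
v_ew = 246.0                     # electroweak vev [GeV]
y_e = 2 ** 0.5 * m_e / v_ew      # electron Yukawa coupling, ~2.9e-6
T_bbn = 1e-3                     # assumed BBN temperature, ~1 MeV [GeV]
M_pl = 1.22e19                   # Planck mass [GeV]

v_h = T_bbn ** 2 / (M_pl * y_e ** 5)   # comes out at a few hundred GeV
```

The estimate indeed lands at O(300 GeV), consistent with the claim that S_rad peaks near the observed weak scale.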
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts were responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged interclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
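The way reliability grows with the number of trials is consistent with the Spearman-Brown prophecy formula. As a sketch (assuming, since the abstract does not state it, that this classical formula underlies the reported coefficients):

```python
def spearman_brown(r1, k):
    """Reliability of the mean of k parallel trials given single-trial r1."""
    return k * r1 / (1 + (k - 1) * r1)

r1_day = 0.939                   # reported single-trial reliability per day
r5 = spearman_brown(r1_day, 5)   # predicted reliability of five trials
print(round(r5, 3))              # 0.987, matching the reported value
```

The same formula explains why averaging over additional days (reported 0.836 for one day, 0.911 for two) keeps paying diminishing returns.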
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
Quevedo, Hernando
2012-01-01
A class of exact solutions of the Einstein-Maxwell equations is presented which contains infinite sets of gravitoelectric, gravitomagnetic and electromagnetic multipole moments. The multipolar structure of the solutions indicates that they can be used to describe the exterior gravitational field of an arbitrarily rotating mass distribution endowed with an electromagnetic field. The presence of gravitational multipoles completely changes the structure of the spacetime because of the appearance of naked singularities in a confined spatial region. The possibility of covering this region with interior solutions is analyzed in the case of a particular solution with quadrupole moment.
Intraoperative radiological margin assessment in breast-conserving surgery.
Ihrai, T; Quaranta, D; Fouche, Y; Machiavello, J-C; Raoust, I; Chapellier, C; Maestro, C; Marcy, M; Ferrero, J-M; Flipo, B
2014-04-01
A prospective study was conducted to analyze the accuracy of an X-ray device installed in the operating room for margin assessment when performing breast-conserving surgery. One hundred and seventy patients were included. All lesions were visible on the preoperative mammograms. An intraoperative X-ray of the lumpectomy specimen was systematically performed for margin assessment. Final histological data were collected and the accuracy of intraoperative specimen radiography (IOSR) for margin assessment was analyzed. IOSR allowed an evaluation of margin status in 155 cases (91.2%). After final histological examination, the positive margins rate would have been 6.5% if margin assessment had relied only on IOSR. Margin assessment with a two-dimensional X-ray device would have allowed the achievement of negative margins in 93.5% of the cases. Moreover, this procedure allows considerable time savings and could have a substantial economic impact. Copyright © 2014 Elsevier Ltd. All rights reserved.
Intraoperative ultrasound control of surgical margins during partial nephrectomy
Feras M Alharbi
2016-01-01
Conclusions: The intraoperative US control of resection margins in PN is a simple, efficient, and effective method for ensuring negative surgical margins with a small increase in warm ischemia time and can be conducted by the operating urologist.
Combining experiments and simulations using the maximum entropy principle.
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-02-01
A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights into the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
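The perturbation described in this abstract has a compact standard form in the maximum entropy literature. As an illustration (with symbols chosen here, not taken from the paper), the reweighted ensemble that matches an experimental average while staying closest to the simulated prior $p^0$ is

```latex
p_i \;\propto\; p^{0}_{i}\, e^{-\lambda f(x_i)},
\qquad \lambda \text{ chosen so that } \sum_i p_i\, f(x_i) = f_{\text{exp}},
```

which is equivalent to minimizing the relative entropy $D(p\,\|\,p^0)$ subject to the experimental constraint; at the ensemble level this corresponds to adding a linear bias $\lambda f(x)$ to the potential energy function.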
The character of the glaciated Mid-Norwegian continental margin
Oline Hjelstuen, Berit; Haflidason, Haflidi; Petter Sejrup, Hans
2010-05-01
During the Pleistocene the development of the NW European continental margin was strongly controlled by the variability in ocean circulation, glaciations and sea-level changes. Repeated occurrence of shelf edge glaciations, from Ireland to Svalbard, started at Marine Isotope Stage 12 (c. 0.5 Ma). During these periods, fast moving ice streams also crossed the Mid-Norwegian continental shelf at a number of locations, and a thick prograding wedge accumulated on the continental slope. During shelf edge glaciations and in early deglaciation phases high sedimentation rates (>2000 cm/ka) existed, and sediments from glacigenic debris flows and meltwater plumes were deposited. Within these depositional environments we identify three slide events. These slides have affected an area between 2900 and 12000 km2 and involved 580-2400 km3 of sediments, noting that the slide debrites left by the failure events reach a maximum thickness of c. 150 m. The failures have occurred within an area dominated by gradients less than 1 degree, and observation of long run-out distances indicates that hydroplaning was important during slide development. Gas hydrate bearing sediments are identified on the mid-Norwegian continental margin, but appear to be absent in the slide scars. Thus, dissociation of gas hydrates may have promoted conditions for the failures to occur. Within the region of gas hydrate bearing Pleistocene sediments the Nyegga Pockmark Field is observed. This field contains more than 200 pockmarks and is located at a water depth of 600-800 m. The pockmarks identified are up to 15 m deep, between 30 m and 600 m across and reach a maximum area of c. 315 000 m2. The pockmarks are sediment-empty features and are restricted to a <16.2 cal ka BP old sandy mud unit. The Nyegga Pockmark Field does not show any strong relationship to seabed features, sub-seabed structures or the glacial sedimentary setting, implying a more complex development history for the Nyegga Pockmark Field.
Jiang Zhu
2014-01-01
Some delta-nabla type maximum principles for second-order dynamic equations on time scales are proved. By using these maximum principles, the uniqueness theorems of the solutions, the approximation theorems of the solutions, the existence theorem, and construction techniques of the lower and upper solutions for second-order linear and nonlinear initial value problems and boundary value problems on time scales are proved, the oscillation of second-order mixed delta-nabla differential equations is discussed, and some maximum principles for second-order mixed forward and backward difference dynamic systems are proved.
An Efficient Algorithm for Maximum-Entropy Extension of Block-Circulant Covariance Matrices
Carli, Francesca P; Pavon, Michele; Picci, Giorgio
2011-01-01
This paper deals with maximum entropy completion of partially specified block-circulant matrices. Since positive definite symmetric circulants happen to be covariance matrices of stationary periodic processes, in particular of stationary reciprocal processes, this problem has applications in signal processing, in particular to image modeling. Maximum entropy completion is strictly related to maximum likelihood estimation subject to certain conditional independence constraints. The maximum entropy completion problem for block-circulant matrices is a nonlinear problem which has recently been solved by the authors, although leaving open the problem of an efficient computation of the solution. The main contribution of this paper is to provide an efficient algorithm for computing the solution. Simulation shows that our iterative scheme outperforms various existing approaches, especially for large dimensional problems. A necessary and sufficient condition for the existence of a positive definite circulant completio...
Application of the maximum entropy method to profile analysis
Armstrong, N.; Kalceff, W. [University of Technology, Department of Applied Physics, Sydney, NSW (Australia); Cline, J.P. [National Institute of Standards and Technology, Gaithersburg, (United States)
1999-12-01
A maximum entropy (MaxEnt) method for analysing crystallite size- and strain-induced x-ray profile broadening is presented. This method treats the problems of determining the specimen profile, crystallite size distribution, and strain distribution in a general way by considering them as inverse problems. A common difficulty faced by many experimenters is their inability to determine a well-conditioned solution of the integral equation, which preserves the positivity of the profile or distribution. We show that the MaxEnt method overcomes this problem, while also enabling a priori information, in the form of a model, to be introduced into it. Additionally, we demonstrate that the method is fully quantitative, in that uncertainties in the solution profile or solution distribution can be determined and used in subsequent calculations, including mean particle sizes and rms strain. An outline of the MaxEnt method is presented for the specific problems of determining the specimen profile and crystallite or strain distributions for the correspondingly broadened profiles. This approach offers an alternative to standard methods such as those of Williamson-Hall and Warren-Averbach. An application of the MaxEnt method is demonstrated in the analysis of alumina size-broadened diffraction data (from NIST, Gaithersburg). It is used to determine the specimen profile and column-length distribution of the scattering domains. Finally, these results are compared with the corresponding Williamson-Hall and Warren-Averbach analyses. Copyright (1999) Australian X-ray Analytical Association Inc.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
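As a concrete illustration of the objective this abstract describes (not the authors' solvers), the l1-penalized Gaussian log-likelihood can be written down directly. Whether the diagonal is penalized varies between formulations; penalizing only the off-diagonal entries, as below, is an assumption of this sketch:

```python
import numpy as np

def penalized_loglik(theta, S, lam):
    """l1-penalized Gaussian log-likelihood:
    log det(Theta) - tr(S @ Theta) - lam * ||Theta||_1 (off-diagonal only)."""
    sign, logdet = np.linalg.slogdet(theta)
    assert sign > 0, "Theta must be positive definite"
    off_diag_l1 = np.sum(np.abs(theta)) - np.sum(np.abs(np.diag(theta)))
    return logdet - np.trace(S @ theta) - lam * off_diag_l1

# Toy check: with sample covariance S = I and Theta = I, the value is
# log det(I) - tr(I) - 0 = -p for dimension p.
S = np.eye(3)
val = penalized_loglik(np.eye(3), S, lam=0.1)
```

Maximizing this function over positive definite Theta is the sparse maximum likelihood problem; the paper's block coordinate descent and Nesterov-based first-order methods are two ways of doing so at scale.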
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall within this class of maximum-entropy distributions when the constraints are purely kinematic.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
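The paper's column generation algorithm is not reproduced here, but the underlying idea (maximize the total variance of the hash codes) has a simple spectral relaxation that can serve as intuition: project onto the top principal directions and take signs. This sketch, with function name and data of our choosing, is a simplification rather than the proposed method:

```python
import numpy as np

def spectral_hash_codes(X, n_bits):
    """Binary codes from the sign of projections onto the top-variance
    directions (a PCA-style relaxation of maximum variance hashing)."""
    Xc = X - X.mean(axis=0)                   # center the data
    cov = Xc.T @ Xc / len(Xc)                 # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    W = eigvecs[:, -n_bits:]                  # top n_bits directions
    return np.where(Xc @ W >= 0, 1, -1)      # codes in {-1, +1}

X = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, -0.1], [3.0, 0.2]])
codes = spectral_hash_codes(X, n_bits=1)
```

Hamming distance between such codes then approximates neighborhood structure in the original space; the paper's contribution is to learn the hash functions directly, with anchor graphs to keep the cost manageable.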
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound on the item sizes, for some integer k.
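The two heuristics named in the abstract are ordinary First-Fit applied to items sorted in increasing or decreasing order. A minimal sketch (the items and capacity here are illustrative; the paper's competitive analysis is not reproduced) shows why sort order matters when the goal is the maximum resource objective, where opening more bins is better:

```python
def first_fit(items, capacity=1.0):
    """Place each item in the first open bin it fits in; open a new bin otherwise."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

items = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1, 0.6]
ffd = first_fit(sorted(items, reverse=True))  # First-Fit-Decreasing
ffi = first_fit(sorted(items))                # First-Fit-Increasing
```

On this instance First-Fit-Decreasing packs the items into 4 bins while First-Fit-Increasing opens 5, so the increasing order is preferable under the maximum resource objective.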
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
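The single-constraint derivation these two abstracts refer to is short enough to state (a standard calculation, with notation chosen here): maximize the Shannon entropy subject to normalization and a fixed mean logarithm,

```latex
\max_{\{p_k\}} \; S = -\sum_k p_k \ln p_k
\quad \text{s.t.} \quad \sum_k p_k = 1, \qquad \sum_k p_k \ln k = \chi.
```

Stationarity of the Lagrangian gives $-\ln p_k - 1 - \mu - \alpha \ln k = 0$, i.e. $p_k = k^{-\alpha}/Z(\alpha)$ with $Z(\alpha) = \sum_k k^{-\alpha}$: a pure power law, with the exponent $\alpha$ fixed by the constraint value $\chi$.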
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This maximum radius is reduced for planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.
Gamma Radiolysis Studies of Aqueous Solution of Brilliant Green Dye
D. V. Parwate
2011-01-01
The effect of γ-radiation on the colour intensity of aqueous solutions of Brilliant Green (BG) has been investigated at two different concentrations. The degradation of BG has also been investigated in the presence of suspended ZnO, by adding different amounts of ZnO. The conductance and pH of each solution system were measured before and after γ-irradiation. All γ-irradiations were performed at a dose rate of 0.60 kGy hr⁻¹ in GC-900. The maximum dose required for the complete degradation of the dye was found to be 0.39 kGy. G(-dye) values were found to decrease with increasing gamma dose and were in the range 4.26-12.81. The conductance (7.6-25.3 μS) and pH values increased marginally with dose for both concentrations. The rate of decolouration was high at lower doses, and the efficiency of dye removal was higher at low dye concentration. This may be attributed to reaction by-products from the destruction of the parent compound, which build up and compete for reactive intermediate species. The rate of reaction and rate constants were calculated, and it was found that the degradation reaction follows first-order kinetics. The decolouration percentage was greater in dye systems in the absence of ZnO.
The role of pressure anisotropy on the maximum mass of cold compact stars
Karmakar, S.; Mukherjee, S.; Sharma, R.; Maharaj, S. D.
2007-06-01
We study the physical features of a class of exact solutions for cold compact anisotropic stars. The effect of pressure anisotropy on the maximum mass and surface red-shift is analysed in the Vaidya--Tikekar model. It is shown that maximum compactness, red-shift and mass increase in the presence of anisotropic pressures; numerical values are generated which are in agreement with observation.
Maximum entropy method applied to deblurring images on a MasPar MP-1 computer
Bonavito, N. L.; Dorband, John; Busse, Tim
1991-01-01
A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.
Domoshnitsky Alexander
2009-01-01
We obtain maximum principles for a first-order neutral functional differential equation whose coefficients are linear continuous operators, two of which are positive, acting on the space of continuous functions and the space of essentially bounded functions defined on a given interval. New tests for positivity of the Cauchy function and its derivative are proposed. Results on the existence and uniqueness of solutions for various boundary value problems are obtained on the basis of the maximum principles.
A maximum principle for diffusive Lotka-Volterra systems of two competing species
Chen, Chiun-Chuan; Hung, Li-Chang
2016-10-01
Using an elementary approach, we establish a new maximum principle for the diffusive Lotka-Volterra system of two competing species, which involves pointwise estimates of an elliptic equation consisting of the second derivative of one function, the first derivative of another function, and a quadratic nonlinearity. This maximum principle gives a priori estimates for the total mass of the two species. Moreover, applying it to the system of three competing species leads to a nonexistence theorem of traveling wave solutions.
Wakai, Nobuhide, E-mail: wakai@naramed-u.ac.jp [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871, Japan and Department of Radiation Oncology, Nara Medical University, Kashihara, Nara 634-8522 (Japan); Sumida, Iori; Otani, Yuki; Suzuki, Osamu; Seo, Yuji; Isohashi, Fumiaki; Yoshioka, Yasuo; Ogawa, Kazuhiko [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Hasegawa, Masatoshi [Department of Radiation Oncology, Nara Medical University, Kashihara, Nara 634-8522 (Japan)
2015-05-15
Purpose: The authors sought to determine the optimal collimator leaf margins which minimize normal tissue dose while achieving high conformity, and to evaluate differences between the use of a flattening filter-free (FFF) beam and a flattening-filtered (FF) beam. Methods: Sixteen lung cancer patients scheduled for stereotactic body radiotherapy underwent treatment planning for 7 MV FFF and 6 MV FF beams to the planning target volume (PTV) with a range of leaf margins (−3 to 3 mm). A dose of 40 Gy in four fractions was prescribed to the PTV D95. For the PTV, the heterogeneity index (HI), conformity index, modified gradient index (GI), defined as the 50% isodose volume divided by target volume, maximum dose (Dmax), and mean dose (Dmean) were calculated. Mean lung dose (MLD), V20 Gy, and V5 Gy for the lung (defined as the volumes of lung receiving at least 20 and 5 Gy), mean heart dose, and Dmax to the spinal cord were measured as doses to organs at risk (OARs). Paired t-tests were used for statistical analysis. Results: HI was inversely related to changes in leaf margin. Conformity index and modified GI initially decreased as leaf margin width increased. After reaching a minimum, the two values then increased as leaf margin increased (“V” shape). The optimal leaf margins for conformity index and modified GI were −1.1 ± 0.3 mm (mean ± 1 SD) and −0.2 ± 0.9 mm, respectively, for 7 MV FFF compared to −1.0 ± 0.4 and −0.3 ± 0.9 mm, respectively, for 6 MV FF. Dmax and Dmean for 7 MV FFF were higher than those for 6 MV FF by 3.6% and 1.7%, respectively. There was a positive correlation between the ratios of HI, Dmax, and Dmean for 7 MV FFF to those for 6 MV FF and PTV size (R = 0.767, 0.809, and 0.643, respectively). The differences in MLD, V20 Gy, and V5 Gy for lung between FFF and FF beams were negligible. The optimal leaf margins for MLD, V20 Gy, and V5 Gy for lung were −0.9 ± 0.6, −1.1 ± 0.8, and −2.1 ± 1.2 mm, respectively, for 7 MV FFF compared
Beige, Joachim; Lutter, Steffen; Martus, Peter
2012-06-01
BACKGROUND: Dialysis bath production, at least in Europe, is currently based on pre-produced aqueous solutions of dialysis salts (concentrate), which are re-handled by dialysis machines to deliver the final dialysate concentrations. Because of the logistics of aqueous solution creation, a large amount of transportation capacity is needed. Therefore, we changed this process to use pre-produced dry salt containers and to undertake in-clinic dissolution of salts and concentrate production. Because no preclinical control for solute concentrations is available so far using this new process, we employed routine clinical chemistry analytics. METHODS: We report the controls of solute concentrations created by these methods for 746 samples of concentrates and 151 dissolution processes. For analysis, absolute and relative deviations from prescriptions and associations between the solute concentrations and the density controls of the concentrates were computed. RESULTS: A total of 98% of all the concentrates were found to be within a 10% margin of error from the prescriptions. The mean relative deviation of the solute concentrations from the prescriptions was -0.635 ± 3.83%. Among particular solutes, sodium had the highest maximum deviation of 26 mmol/L from the prescription. Calcium and magnesium (small concentration solutes) exhibited small systematic errors of 1.37 and 1.22%, respectively. Other solute concentrations showed random errors only and no associations with the mean relative deviations of all the solutes within a production batch or with the density controls. CONCLUSIONS: Single solute concentration control by routine clinical chemistry after dry salt production of concentrates is a valuable additional tool for monitoring clinical risk with dialysate concentrates. The analytical random error of clinical chemistry exceeds the weight tolerance of production; therefore, such analytics cannot be used for precision production and control of dry salt containers.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
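To sketch the contrast this abstract draws (symbols chosen here, as a schematic rather than the paper's exact expressions): maximizing the Boltzmann-Gibbs entropy of the dose-effect distribution subject to a fixed mean effect yields an exponential survival curve, while the Tsallis entropy together with a cutoff hypothesis yields a power law with a finite maximum dose,

```latex
S_{\text{BG}}(D) = e^{-\alpha D}
\qquad \text{vs.} \qquad
S_{q}(D) = \left(1 - \frac{D}{D_0}\right)^{\gamma}, \quad 0 \le D \le D_0,
```

with $S_q(D) = 0$ for $D > D_0$; the cutoff dose $D_0$ and the exponent $\gamma$ play the role of tissue-specific parameters to be fitted.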
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),...,\lambda_n(G)$ be the eigenvalues of its adjacency matrix. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
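The quantity defined in this abstract is straightforward to evaluate numerically for any small graph. A minimal sketch (the path graph P3 below is our illustrative choice, not taken from the paper):

```python
import numpy as np

def estrada_index(A):
    """Estrada index EE(G) = sum of exp(lambda_i) over the eigenvalues
    of the (symmetric) adjacency matrix A."""
    eigvals = np.linalg.eigvalsh(np.asarray(A, dtype=float))
    return float(np.sum(np.exp(eigvals)))

# The path graph P3 has adjacency eigenvalues -sqrt(2), 0, sqrt(2),
# so EE(P3) = exp(-sqrt(2)) + 1 + exp(sqrt(2)) ≈ 5.356
A_p3 = [[0, 1, 0],
        [1, 0, 1],
        [0, 1, 0]]
ee_p3 = estrada_index(A_p3)
```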
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Schuster, Tibor; Pang, Menglan; Platt, Robert W
2015-09-01
The high-dimensional propensity score algorithm attempts to improve control of confounding in typical treatment effect studies in pharmacoepidemiology and is increasingly being used for the analysis of large administrative databases. Within this multi-step variable selection algorithm, the marginal prevalence of non-zero covariate values is considered to be an indicator for a count variable's potential confounding impact. We investigate the role of the marginal prevalence of confounder variables on potentially caused bias magnitudes when estimating risk ratios in point exposure studies with binary outcomes. We apply the law of total probability in conjunction with an established bias formula to derive and illustrate relative bias boundaries with respect to marginal confounder prevalence. We show that maximum possible bias magnitudes can occur at any marginal prevalence level of a binary confounder variable. In particular, we demonstrate that, in case of rare or very common exposures, low and high prevalent confounder variables can still have large confounding impact on estimated risk ratios. Covariate pre-selection by prevalence may lead to sub-optimal confounder sampling within the high-dimensional propensity score algorithm. While we believe that the high-dimensional propensity score has important benefits in large-scale pharmacoepidemiologic studies, we recommend omitting the prevalence-based empirical identification of candidate covariates. Copyright © 2015 John Wiley & Sons, Ltd.
Effect of postponed polishing on marginal adaptation of resin used with dentin-bonding agent.
Hansen, E K; Asmussen, E
1988-06-01
Dentin cavities, prepared in extracted human teeth, were treated with two different dentin-bonding agents and filled with a light-activated microfilled resin. The maximum width of the contraction gap (MG) and the extent of the gap (GP) were then measured with a light microscope, approximately 0.1 mm below the original free surface of the filling. The contraction gap was measured 30 s, 10 min, or 60 min after the end of irradiation, and remeasured at 65 min. A positive correlation was found between the two variables, MG and GP. The product of MG and GP was chosen as the basis for the statistical analyses. This "marginal index" was significantly reduced when polishing of the marginal area was postponed for 10 min with one of the dentin-bonding agents and for 60 min with the other. Even though the improvement in marginal adaptation was statistically significant, it was considered clinically irrelevant. It is concluded that polishing of the marginal area should not be done before the hygroscopic expansion of the resin restoration has closed the contraction gap.
Evaluation of carbon dioxide laser therapy for benign tumors of the eyelid margin.
Rentka, Aniko; Grygar, Jan; Nemes, Zoltan; Kemeny-Beke, Adam
2017-09-02
Eyelid margin tumors require special attention from both anatomical and histological perspectives. Our aim in this study was to evaluate carbon dioxide (CO2) laser therapy for the treatment of eyelid margin tumors. Fifty-two patients with 55 eyelid margin tumors were included in this study. All tumors were removed with a CO2 laser, and histopathological evaluation was obtained in 52 cases. All patients were followed up for a mean period of 8.5 months (range 6 to 14 months). There was no bleeding in the intra- or postoperative period; the wounds were dry and re-epithelialized after 10-14 days, and no recurrence occurred during the follow-up period. Compared to the surrounding tissue, the treated area was hypopigmented, and a maximum of five eyelashes (average 2.5) were lost during the procedure. We achieved complete patient and surgeon satisfaction with the cosmetic and therapeutic results. CO2 laser treatment of the eyelid margin is a safe and effective procedure; its cosmetic result is beneficial, as it does not cause malposition of the eyelid or damage to the lacrimal drainage system if the tumor is located in its proximity.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus differed from that of the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max - X_0 = (0.59 ± 0.02)·Y_X/P·C.
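The prediction equation is a one-line computation once Y_X/P and the MIC of lactate are known. A hedged sketch (variable names and the illustrative numbers are mine; the coefficient 0.59 is the fitted value reported in the abstract):

```python
def predict_max_biomass(x0, y_xp, mic_lactate, k=0.59):
    """Predicted maximum biomass X_max = X_0 + k * Y_X/P * C,
    with k = 0.59 +/- 0.02 as fitted in the study.
    x0: initial biomass, y_xp: biomass yield per unit lactate,
    mic_lactate: MIC of lactate (C), all in consistent units."""
    return x0 + k * y_xp * mic_lactate

# Illustrative (made-up) numbers: X_0 = 0.1 g/L,
# Y_X/P = 0.3 g biomass per g lactate, MIC = 20 g/L.
print(predict_max_biomass(0.1, 0.3, 20.0))  # 0.1 + 0.59*0.3*20 = 3.64
```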
Marginal phase correction of truncated Bessel beams
Sedukhin
2000-06-01
Approximate analytic expressions are obtained for evaluating the axial intensity and the central-lobe diameter of J0 Bessel beams transmitted through a finite-aperture phase filter. A reasonable quality factor governing the axial-intensity behavior of a phase-undistorted truncated Bessel beam is found to be the inverse square root of the Fresnel number defined, for a given aperture, from the axial point of geometrical shadow. Additional drastic reduction of axial-intensity oscillations is accomplished by using marginal phase correction of the beam instead of the well-known amplitude apodization. A procedure for analytically calculating an optimal monotonic slowly varying correction phase function is described.
Potentials of marginal lands - spontaneous ecosystem development
Gerwin, Werner; Schaaf, Wolfgang
2017-04-01
Marginal lands are often considered unfertile and unproductive, and they are widely excluded from modern land use by conventional agriculture. Assessment of soil fertility usually shows very low productivity potentials, at least for growing traditional crops. However, it can frequently be observed that natural succession on different types of marginal lands leads to very diverse and nonetheless productive ecosystems. Examples can be found at abandoned former industrial or transportation sites which were set aside and no longer maintained, and also in post-mining landscapes. In one of the lignite open-cast mines of the State of Brandenburg in Eastern Germany, a landscape observatory was established in 2005 for observing this natural ecosystem development under marginal site conditions. The 6 ha site is part of the post-mining landscapes of Lusatia, which are often characterized by poor soil conditions and clearly reduced soil fertility. It is named "Hühnerwasser-Quellgebiet" (Chicken Creek Catchment) after a small stream that is being restored after destruction by the mining operations. The site is planned to serve as the headwater of this stream and was left to unrestricted primary succession. A comprehensive scientific monitoring program has been carried out since the start of ecosystem development in 2005. The results offer exemplary insights into the establishment of interaction networks between the developing ecosystem compartments. After 10 years, a large biodiversity, expressed by a high number of species, can be found at this site as the result of natural recovery processes. A large number of both tree species and individuals have settled here. Even if no economic use of the site or of the woody biomass produced by these trees is planned, an overall assessment of the biomass production was carried out. The results showed that the biomass production from natural succession without any application of fertilizers etc. is directly comparable with yields from
Area inequalities for stable marginally trapped surfaces
Jaramillo, José Luis
2012-01-01
We discuss a family of inequalities involving the area, angular momentum and charges of stably outermost marginally trapped surfaces in generic non-vacuum dynamical spacetimes, with non-negative cosmological constant and matter sources satisfying the dominant energy condition. These inequalities provide lower bounds for the area of spatial sections of dynamical trapping horizons, namely hypersurfaces offering quasi-local models of black hole horizons. In particular, these inequalities represent particular examples of the extension to a Lorentzian setting of tools employed in the discussion of minimal surfaces in Riemannian contexts.
Formation and evolution of magma-poor margins, an example of the West Iberia margin
Perez-Gussinye, Marta; Andres-Martinez, Miguel; Morgan, Jason P.; Ranero, Cesar R.; Reston, Tim
2016-04-01
The West Iberia-Newfoundland (WIM-NF) conjugate margins have been geophysically and geologically surveyed for the last 30 years and have arguably become a paradigm for magma-poor extensional margins. Here we present a coherent picture of the WIM-NF rift-to-drift evolution that emerges from these observations and numerical modeling, and point out important differences that may exist with other magma-poor margins worldwide. The WIM-NF is characterized by a continental crust that thins asymmetrically and a wide and symmetric continent-ocean transition (COT) interpreted to consist of exhumed and serpentinised mantle with magmatic products increasing oceanward. The architectural evolution of these margins is mainly dominated by cooling under very slow extension velocities acting on a crust that most probably was not extremely weak at the start of rifting. These conditions lead to a system where deformation is initially distributed over a broad area and the upper crust, lower crust and lithosphere are decoupled. As extension progresses, the upper crust, lower crust and mantle become tightly coupled and deformation localizes due to strengthening and cooling during rifting. Coupling leads to asymmetric asthenospheric uplift and weakening of the hanging wall of the active fault, where a new fault forms. This continued process leads to the formation of an array of sequential faults that dip and become younger oceanward. Here we show that these processes acting in concert: 1) reproduce the margin asymmetry observed at the WIM-NF, 2) explain the fault geometry evolution from planar, to listric, to detachment-like within one common Andersonian framework, 3) lead to the symmetric exhumation of mantle with little magmatism, and 4) explain the younging of the syn-rift towards the basin centre and imply that unconformities separating syn- and post-rift may be diachronous and younger towards the ocean. Finally, we show that different lower crustal rheologies lead to different patterns of extension and to an
Approximating the maximum weight clique using replicator dynamics.
Bomze, I R; Pelillo, M; Stix, V
2000-01-01
Given an undirected graph with weights on the vertices, the maximum weight clique problem (MWCP) is to find a subset of mutually adjacent vertices (i.e., a clique) having the largest total weight. This is a generalization of the classical problem of finding the maximum cardinality clique of an unweighted graph, which arises as a special case of the MWCP when all the weights associated with the vertices are equal. The problem is known to be NP-hard for arbitrary graphs and, according to recent theoretical results, so is the problem of approximating it within a constant factor. Although there has recently been much interest around neural-network algorithms for the unweighted maximum clique problem, no effort has been directed so far toward its weighted counterpart. In this paper, we present a parallel, distributed heuristic for approximating the MWCP based on dynamics principles developed and studied in various branches of mathematical biology. The proposed framework centers around a recently introduced continuous characterization of the MWCP which generalizes an earlier remarkable result by Motzkin and Straus. This allows us to formulate the MWCP (a purely combinatorial problem) in terms of a continuous quadratic programming problem. One drawback associated with this formulation, however, is the presence of "spurious" solutions, and we present characterizations of these solutions. To avoid them we introduce a new regularized continuous formulation of the MWCP inspired by previous works on the unweighted problem, and show how this approach completely solves the problem. The continuous formulation of the MWCP naturally maps onto a parallel, distributed computational network whose dynamical behavior is governed by the so-called replicator equations. These are dynamical systems introduced in evolutionary game theory and population genetics to model evolutionary processes on a macroscopic scale. We present theoretical results which guarantee that the solutions provided by
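The replicator dynamics described here can be sketched for the simpler unweighted case that the paper generalizes. The sketch below (names my own) iterates the discrete replicator equation on the regularized Motzkin-Straus program with matrix W = A + I/2, whose strict local maximizers correspond to characteristic vectors of maximal cliques:

```python
import numpy as np

def replicator_clique(adj, iters=5000, tol=1e-12):
    """Run discrete replicator dynamics x_i <- x_i (Wx)_i / (x'Wx)
    on W = A + I/2 (regularized Motzkin-Straus matrix), starting from
    the simplex barycenter; return the support of the limit point,
    which is the vertex set of a maximal clique."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    W = A + 0.5 * np.eye(n)
    x = np.full(n, 1.0 / n)          # barycenter of the simplex
    for _ in range(iters):
        payoff = W @ x
        x_new = x * payoff / (x @ payoff)
        if np.abs(x_new - x).sum() < tol:
            x = x_new
            break
        x = x_new
    return {i for i in range(n) if x[i] > 1e-4}

# Triangle {0,1,2} with a pendant path 0-3-4: starting at the barycenter,
# the dynamics settle on the (here unique maximum) clique {0, 1, 2}.
G = [[0, 1, 1, 1, 0],
     [1, 0, 1, 0, 0],
     [1, 1, 0, 0, 0],
     [1, 0, 0, 0, 1],
     [0, 0, 0, 1, 0]]
print(sorted(replicator_clique(G)))  # [0, 1, 2]
```

Note that, as with the heuristic in the paper, this only guarantees a maximal clique in general; the symmetric barycenter start merely biases the dynamics toward large cliques.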
Late relapse of marginal zone lymphoma
Rocha,Talita M. B.S.; Bortolheiro,Tereza C.; Eduardo Costa; Daniela Haardt; Roberto P. Paes; Chiattone, Carlos S.
2009-01-01
Marginal zone lymphoma is a low-grade lymphoma with an indolent clinical course and a potential for relapse. We present a case of late relapse after 25 years of apparent complete remission, raising the possibility of either relapse of pre-existing disease or the development of a new neoplastic clone.
Seitz, M.G.
1982-01-01
Reviewed in this statement are methods of preparing solutions to be used in laboratory experiments to examine technical issues related to the safe disposal of nuclear waste from power generation. Each approach currently used to prepare solutions has advantages and any one approach may be preferred over the others in particular situations, depending upon the goals of the experimental program. These advantages are highlighted herein for three approaches to solution preparation that are currently used most in studies of nuclear waste disposal. Discussion of the disadvantages of each approach is presented to help a user select a preparation method for his particular studies. Also presented in this statement are general observations regarding solution preparation. These observations are used as examples of the types of concerns that need to be addressed regarding solution preparation. As shown by these examples, prior to experimentation or chemical analyses, laboratory techniques based on scientific knowledge of solutions can be applied to solutions, often resulting in great improvement in the usefulness of results.
Jin, L; Wang, L; Li, J; Luo, W; Feigenberg, S J; Ma, C-M [Department of Radiation Oncology, Fox Chase Cancer Center, Philadelphia, PA 19111 (United States)
2007-07-21
This work investigated the selection of beam margins in lung-cancer stereotactic body radiotherapy (SBRT) with 6 MV photon beams. Monte Carlo dose calculations were used to systematically and quantitatively study the dosimetric effects of beam margins for different lung densities (0.1, 0.15, 0.25, 0.35 and 0.5 g cm^-3), planning target volumes (PTVs) (14.4, 22.1 and 55.3 cm^3) and numbers of beam angles (three, six and seven) in lung-cancer SBRT in order to search for optimal beam margins for various clinical situations. First, a large number of treatment plans were generated in a commercial treatment planning system, and then recalculated using Monte Carlo simulations. All the plans were normalized to ensure that 95% of the PTV at least receives the prescription dose and compared quantitatively. Based on these plans, the relationships between the beam margin and quantities such as the lung toxicity (quantified by V_20, the percentage volume of the two lungs receiving at least 20 Gy) and the maximum target (PTV) dose were established for different PTVs and lung densities. The impact of the number of beam angles on the relationship between V_20 and the beam margin was assessed. Quantitative information about optimal beam margins for lung-cancer SBRT was obtained for clinical applications.
ZHANG Hong-lie; ZHANG Guo-yin; YAO Ai-hong
2010-01-01
This paper presents an algorithm that combines chaos optimization with maximum entropy (COA-ME), using an entropy model based on a chaos algorithm in which maximum entropy serves as a secondary method of searching for good solutions. The search direction is improved by the chaos optimization algorithm, which selectively accepts inferior solutions. Experimental results show that the presented algorithm can be used for hardware/software partitioning of reconfigurable systems; it effectively reduces the local-extremum problem and improves both search speed and partitioning performance.
Characterizing entanglement with global and marginal entropic measures
Adesso, G; De Siena, S; Adesso, Gerardo; Illuminati, Fabrizio; Siena, Silvio De
2003-01-01
We qualify the entanglement of arbitrary mixed states of bipartite quantum systems by comparing global and marginal mixednesses quantified by different entropic measures. For systems of two qubits we discriminate the class of maximally entangled states with fixed marginal mixednesses, and determine an analytical upper bound relating the entanglement of formation to the marginal linear entropies. This result partially generalizes to mixed states the quantification of entanglement with marginal mixednesses holding for pure states. We identify a class of entangled states that, for fixed marginals, are globally more mixed than product states when measured by the linear entropy. Such states cannot be discriminated by the majorization criterion.
Cetin, Bilge Kartal; Prasad, Neeli R.; Prasad, Ramjee
2011-01-01
In this paper, we tackle the routing challenge of maximizing sensor network lifetime. We introduce a novel linear programming approach to the maximum lifetime routing problem that considers the operation modes of the nodes, the protocols, and the energy model for transmission; to the best of our knowledge, this is the first mathematical programming formulation of this kind. Solving the linear program gives an upper analytical bound for the network lifetime. To illustrate the application of the optimization model, we solved the problem for different parameter settings.
An Improved Maximum C/I Scheduling Algorithm Combined with HARQ
Anonymous
2003-01-01
It is well known that downlink traffic will be much greater than uplink traffic in 3G and beyond. High Speed Downlink Packet Access (HSDPA) is the UMTS solution for transmitting high-speed downlink packet services, and Maximum C/I scheduling is one of the important algorithms for enhancing its performance. An improved scheme, the Thorough Maximum C/I scheduling algorithm, is presented in this article, in which every transmitted frame has the maximum C/I. The simulation results show that the new Maximum C/I scheme outperforms the conventional scheme in throughput and delay performance, and that the FER decreases faster as the maximum number of retransmissions increases.
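At its core, Maximum C/I scheduling simply serves, in each TTI, the user reporting the best instantaneous channel quality. A toy sketch (the function name and the Gaussian fading model are illustrative assumptions, not from the article):

```python
import random

def max_ci_schedule(ci_reports):
    """Return the index of the user with the highest reported C/I."""
    return max(range(len(ci_reports)), key=lambda u: ci_reports[u])

# Simulate a few TTIs with random per-user C/I reports (in dB).
random.seed(0)
for tti in range(3):
    reports = [random.gauss(10, 4) for _ in range(4)]  # 4 users
    chosen = max_ci_schedule(reports)
    print(f"TTI {tti}: serve user {chosen} (C/I = {reports[chosen]:.1f} dB)")
```

Serving only the instantaneously best user maximizes cell throughput at the cost of fairness, which is why variants such as the Thorough Maximum C/I scheme above focus on combining it with HARQ retransmission handling.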
A study on the growth curve of and maximum profit from layer-type cockerel chicks.
Gang, F Y; Zhen, Y S
1997-09-01
1. 2900 commercial layer-type cockerel chicks were reared on the floor from 1 day old to 9 weeks of age. 2. The growth curve of the cockerel chicks was [formula: see text]. 3. The feeding costs (US$) of layer-type cockerel chicks were described by the equation Y = a + bx + cx^2 = 0.0657 - 0.0091x + 0.0069x^2. 4. When the marketing price of layer-type cockerel chicks was US$0.82 per kg (6.8 Renminbi per kg), the optimum marketing age for maximum profit margin was 5.9 weeks (41 to 42 d).
Maximum Likelihood Combining of Stochastic Maps
2011-09-01
M. Csorba, “A solution to the simultaneous localisation and mapping (SLAM) problem,” IEEE Transactions on Robotics and Automation, Vol. 17, No. 3, pp... IEEE Transactions on Robotics and Automation, Vol. 17, No. 6, pp. 890–897, 2001. [7] Y. Bar-Shalom and T. Fortmann, Tracking and Data Association
M Ehrmantraut Nogales
2011-12-01
Aim: The purpose of this research was to study in vitro the marginal sealing of 80 composite resin inlays cemented with flowable composite resin, using either a self-etching adhesive system (Go!, SDI, Australia) or a total-etch adhesive system (Stae, SDI, Australia). Method: The restored teeth were kept in an oven at 37ºC and 100% relative humidity for 48 hours to simulate oral conditions. They were then thermocycled in a 1% methylene blue solution; this cycle was repeated 80 times. The samples were cut in the buccolingual or buccopalatal direction and the tooth-restoration interface was observed under an optical microscope to measure the percentage of dye penetration for both groups. Results: The results were statistically analyzed with Student's t-test, which showed significant differences between the two groups. Conclusion: All specimens presented some degree of marginal leakage; however, the group restored with the self-etching adhesive system showed significantly higher leakage values than the group restored with the conventional total-etch system.
Research priorities for zoonoses and marginalized infections.
2012-01-01
This report provides a review and analysis of the research landscape for zoonoses and marginalized infections which affect poor populations, and a list of research priorities to support disease control. The work is the output of the Disease Reference Group on Zoonoses and Marginalized Infectious Diseases of Poverty (DRG6), which is part of an independent think tank of international experts, established and funded by the Special Programme for Research and Training in Tropical Diseases (TDR), to identify key research priorities through review of research evidence and input from stakeholder consultations. The report covers a diverse range of diseases, including zoonotic helminth, protozoan, viral and bacterial infections considered to be neglected and associated with poverty. Disease-specific research issues are elaborated under individual disease sections, and many common priorities are identified among the diseases, such as the need for new and/or improved drugs and regimens, diagnostics and, where appropriate, vaccines. The disease-specific priorities are described as micro priorities, compared with the macro-level priorities which will drive policy-making for: improved surveillance; interaction between the health, livestock, agriculture, natural resources and wildlife sectors in tackling zoonotic diseases; and true assessment of the burden of zoonoses. This is one of ten disease and thematic reference group reports that have come out of the TDR Think Tank, all of which have contributed to the development of the Global Report for Research on Infectious Diseases of Poverty, available at: w.who.int/tdr/stewardship/global_report/en/index.html.
Instability and Tsunamigenic Potential at Convergent Margins
von Huene, R.; Ranero, C. R.; Watts, P.
2001-12-01
Along many convergent margins multibeam echosounding navigated with GPS has revealed large slope failures that were probably tsunamigenic. Bathymetric data combined with seismic reflection imaging indicate multiple causes. The 55-km wide Nicoya Slump resulted from the steepening slope above an underthrusting seamount on the subducting oceanic plate. This slump may have generated a 27-m high wave. Several 5-7 km wide mid-slope slides off central Nicaragua probably resulted from steepening of the continental slope by tectonic erosion. They may have generated waves 6-7 m high. A 30 km wide mid-slope slump off northern Peru may have generated a 5 m high wave. Its cause will not be understood without better seismic reflection imaging but considerable fluid venting was observed across its headwall. In the Gulf of Alaska a large slide appears to have resulted from rapid sedimentation. Tsunamigenic slope failure along convergent margins is only beginning to be resolved and the causes vary. Subducted ocean floor relief, tectonically steepened slopes, and sites of rapid sedimentation can help target potential failure and possible future tsunami hazards.
Large margin classifier-based ensemble tracking
Wang, Yuru; Liu, Qiaoyuan; Yin, Minghao; Wang, ShengSheng
2016-07-01
In recent years, many studies have treated visual tracking as a two-class classification problem. The key task is to construct a classifier with sufficient accuracy in distinguishing the target from its background and sufficient generalization ability in handling new frames. However, variable tracking conditions challenge existing methods; the difficulty mainly comes from the confused boundary between foreground and background. This paper addresses the difficulty by generalizing the classifier's learning step. By introducing the distribution of the samples, the classifier learns more of the essential characteristics that discriminate the two classes. Specifically, the samples are represented in a multiscale visual model. For features of different scales, several large margin distribution machines (LDMs) with adaptive kernels are combined in a Bayesian way as a strong classifier, where, in order to improve accuracy and generalization ability, not only the margin distance but also the sample distribution is optimized in the learning step. Comprehensive experiments are performed on several challenging video sequences; through parameter analysis and field comparison, the proposed LDM-combined ensemble tracker is demonstrated to perform with sufficient accuracy and generalization ability in handling various typical tracking difficulties.
Limit theorems for functions of marginal quantiles
Babu, G Jogesh; Choi, Kwok Pui; Mangalam, Vasudevan; 10.3150/10-BEJ287
2011-01-01
Multivariate distributions are explored using the joint distributions of marginal sample quantiles. Limit theory for the mean of a function of order statistics is presented. The results include a multivariate central limit theorem and a strong law of large numbers. A result similar to Bahadur's representation of quantiles is established for the mean of a function of the marginal quantiles. In particular, it is shown that \[\sqrt{n}\Biggl(\frac{1}{n}\sum_{i=1}^n\phi\bigl(X_{n:i}^{(1)},\ldots,X_{n:i}^{(d)}\bigr)-\bar{\gamma}\Biggr)=\frac{1}{\sqrt{n}}\sum_{i=1}^nZ_{n,i}+\mathrm{o}_P(1)\] as $n\rightarrow\infty$, where $\bar{\gamma}$ is a constant and $Z_{n,i}$ are i.i.d. random variables for each $n$. This leads to the central limit theorem. Weak convergence to a Gaussian process using equicontinuity of functions is indicated. The results are established under very general conditions. These conditions are shown to be satisfied in many commonly occurring situations.
HISTOPATHOLOGY OF MARGINAL SUPERFICIAL PERIODONTIUM AT MENOPAUSE
A. Georgescu
2012-03-01
Premises: Sexual hormones may affect the general health condition of women as early as puberty, continuing during pregnancy and also after menopause. Variations in hormonal levels may cause different pathological modifications, either local or general. Sexual hormones may also affect periodontal status, favoring gingival inflammation and reducing periodontal resistance to the action of bacterial plaque. Scope: To establish correlations between the onset or manifestation of menopause and the modifications produced in the superficial periodontium. Materials and method: Clinical and paraclinical investigations were performed on female patients aged between 45 and 66 years, involving macroscopic, microscopic and radiological recording of the aspect of the superficial periodontium (gingiva). Results: Analysis of the histological sections evidenced atrophic and involutive modifications in the marginal superficial periodontium of female patients at menopause. Conclusions: Sexual hormones intervene in the histological equilibrium of the marginal superficial periodontium, influencing periodontal health status, which explains the correlation between the subjective symptomatology specific to menopause and the histopathological aspect at the epithelial level.
Naturally light dilatons from nearly marginal deformations
Megias, Eugenio
2014-01-01
We discuss the presence of a light dilaton in CFTs deformed by a nearly-marginal operator O, in the holographic realizations consisting of confining RG flows that end on a soft wall. Generically, the deformations induce a condensate ⟨O⟩, and the dilaton mode can be identified as the fluctuation of ⟨O⟩. We obtain a mass formula for the dilaton as a certain average along the RG flow. The dilaton is naturally light whenever i) confinement is reached fast enough (such as via the condensation of O) and ii) the beta function is small (walking) at the condensation scale. These conditions are satisfied for a class of models with a bulk pseudo-Goldstone boson whose potential is nearly flat at small field values and exponential at large field values. Thus, the recent observation by Contino, Pomarol and Rattazzi holds in CFTs with a single nearly-marginal operator. We also discuss the holographic method to compute the condensate ⟨O⟩, based on solving the first-order nonlinear differential equation that the beta function satisfies.
Photogrammetric monitoring of glacier margin lakes
Christian Mulsow
2015-07-01
The growing number of glacier margin lakes that have developed due to glacier retreat has caused an increase in dangerous glacier lake outburst floods (GLOFs) in several regions over the last decade. Such an event normally causes a flood wave downstream of the glacier and typically takes a few to several hours. GLOF scenarios may be a significant hazard to life, property, nature and infrastructure in the affected areas. A GLOF is usually characterized by a progressive water level drop, so by observing the water level of the lake, an imminent GLOF event can be identified. Common gauging systems are often not suitable for this measurement task, as they may be affected by ice fall or landslides in the lake basin. Therefore, in our pilot study, the water level is observed by processing images from a terrestrial camera system observing a glacier margin lake. The paper presents the basic principle of an automatic single-camera-based GLOF early warning system. Challenges, and approaches to solve them, are discussed. First results from processed image sequences are presented to show the feasibility of the concept. Water level changes can be determined at decimetre precision.
PGSFR Core Thermal Design Procedure to Evaluate the Safety Margin
Choi, Sun Rock; Kim, Sang-Ji [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
The Korea Atomic Energy Research Institute (KAERI) has performed an SFR design with the final goal of constructing a prototype plant by 2028. The main objective of the SFR prototype plant is to verify the TRU metal fuel performance, reactor operation, and transmutation ability of high-level wastes. The core thermal design is to ensure safe fuel performance during the whole plant operation. Compared to the critical heat flux in typical light water reactors, nuclear fuel damage in SFR subassemblies arises from a creep-induced failure. The creep limit is evaluated based on the maximum cladding temperature, power, neutron flux, and uncertainties in the design parameters, as shown in Fig. 1. In this work, the core thermal design procedures are compared to verify the present PGSFR methodology based on the nuclear plant design criteria/guidelines and previous SFR thermal design methods. The PGSFR core thermal design procedure is verified based on the nuclear plant design criteria/guidelines and previous methods in LWRs and SFRs. The present method aims to directly evaluate the fuel cladding failure and to assure more safety margin. The 2σ uncertainty is similar to the 95% one-sided tolerance limit of 1.96σ in LWRs. The HCFs, ITDP, and MCM reveal similar uncertainty propagation for cladding midwall temperature for typical SFR conditions. The present HCFs are mainly employed from the CRBR, except the fuel-related uncertainty such as an incorrect fuel distribution. Preliminary PGSFR-specific HCFs will be developed by the end of 2015.
An approximate, maximum-terminal-velocity descent to a point
Eisler, G. Richard; Hull, David G.
A neighboring extremal control problem is formulated for a hypersonic glider to execute a maximum-terminal-velocity descent to a stationary target in a vertical plane. The resulting two-part, feedback control scheme initially solves a nonlinear algebraic problem to generate a nominal trajectory to the target altitude. Secondly, quadrature about the nominal provides the lift perturbation necessary to achieve the target downrange. On-line feedback simulations are run for the proposed scheme and a form of proportional navigation and compared with an off-line parameter optimization method. The neighboring extremal terminal velocity compares very well with the parameter optimization solution and is far superior to proportional navigation. However, the update rate is degraded, though the proposed method can be executed in real time.
Delocalized Epidemics on Graphs: A Maximum Entropy Approach
Sahneh, Faryad Darabi; Scoglio, Caterina
2016-01-01
The susceptible-infected-susceptible (SIS) epidemic process on complex networks can show metastability, resembling an endemic equilibrium. In a general setting, the metastable state may involve a large portion of the network, or it can be localized on small subgraphs of the contact network. Localized infections are not interesting because a true outbreak concerns network-wide invasion of the contact graph rather than localized infection of certain sites within the contact network. Existing approaches to the localization phenomenon suffer from a major drawback: they fully rely on the steady-state solution of mean-field approximate models in the neighborhood of their phase transition point, where their approximation accuracy is worst, as statistical physics tells us. We propose a dispersion entropy measure that quantifies the localization of infections in a generic contact graph. Formulating a maximum entropy problem, we find an upper bound for the dispersion entropy of the possible metastable state in the exa...
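As a toy illustration of the idea, the following sketch computes a Shannon-entropy-based localization measure for a vector of node infection probabilities. This is our own minimal reconstruction, not the authors' definition of dispersion entropy; the function name and normalization are assumptions.

```python
import numpy as np

def dispersion_entropy(x):
    """Shannon entropy of the normalized infection-probability vector x.

    Intuition: exp(H) roughly counts the nodes carrying the metastable
    infection, so low entropy means the infection is localized on a
    small subgraph, while high entropy means network-wide spread.
    """
    p = np.asarray(x, dtype=float)
    p = p / p.sum()                      # normalize to a distribution
    p = p[p > 0]                         # convention: 0 * log(0) = 0
    return float(-(p * np.log(p)).sum())

# Delocalized: infection spread uniformly over 100 nodes
uniform = np.full(100, 0.3)
# Localized: infection concentrated on 5 nodes
localized = np.r_[np.full(5, 0.3), np.full(95, 1e-6)]

assert dispersion_entropy(uniform) > dispersion_entropy(localized)
```

For the uniform vector the entropy attains its maximum, log(100), consistent with the upper-bound role the maximum entropy formulation plays in the abstract.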
Printing Detecting Algorithm Basing on Maximum Degree of Recognition
Hu Zhang
2013-04-01
In the modern packaging and printing industry, owing to the properties of the strip itself and the ambient light, the strip background color and the color of the printing line, and the low-contrast boundaries on both sides of the strip, traditional digital qualitative detection and control for the correction system does not meet the overall requirements. This paper studies the detection of continuous lines, discontinuous lines and color dividing lines on the strip; because of the low contrast between the background color and the dividing line, we propose an innovative solution and its implementation. The article discusses a new algorithm based on maximum degree of recognition, together with an optimal light source search algorithm, and we simulated this in MATLAB; finally, we completed the physical testing of the overall system.
Network Decomposition and Maximum Independent Set Part Ⅰ: Theoretic Basis
朱松年; 朱嫱
2003-01-01
The structure and characteristics of a connected network are analyzed, and a special kind of sub-network, which can optimize the iteration processes, is discovered. Then, the sufficient and necessary conditions for obtaining the maximum independent set are deduced. It is found that the neighborhood of this sub-network possesses similar characteristics, but the two can never be merged together. In particular, it is shown that the network can be divided into two parts in a certain way, and both of them can then be transformed into a pair-sets network, in which the special sub-networks and their neighborhoods appear alternately distributed throughout. Using this characteristic, a decomposition of the network that is fine enough yet loses no solutions is obtained. All of this prepares the ground for developing a much better algorithm, with a polynomial time bound for an odd network, in the application research part of this subject.
Maximum, minimum, and optimal mutation rates in dynamic environments
Ancliff, Mark; Park, Jeong-Man
2009-12-01
We analyze the dynamics of the parallel mutation-selection quasispecies model with a changing environment. For an environment with the sharp-peak fitness function in which the most fit sequence changes by k spin flips every period T, we find analytical expressions for the minimum and maximum mutation rates for which a quasispecies can survive, valid in the limit of large sequence size. We find an asymptotic solution in which the quasispecies population changes periodically according to the periodic environmental change. In this state we compute the mutation rate that gives the optimal mean fitness over a period. We find that the optimal mutation rate per genome, k/T, is independent of genome size, a relationship which is observed across broad groups of real organisms.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of Support Vector Machines (SVMs), the "kernel method" has been studied extensively. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, and then performs MEC clustering in the feature space. The experimental results show that the proposed method performs better on non-hyperspherical and complex data structures.
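To illustrate how the kernel trick combines with maximum-entropy (soft, Gibbs-weighted) cluster assignments, here is a hedged NumPy sketch. It is our own minimal reconstruction of the idea, not the authors' KMEC implementation; the function names, the RBF kernel choice, and the fixed temperature T are all assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix of the Gaussian (RBF) Mercer kernel."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_maxent_cluster(X, k=2, T=0.1, gamma=1.0, iters=50, seed=0):
    """Sketch of maximum-entropy clustering in an RBF feature space.

    Soft memberships follow a Gibbs distribution p(c|x) ∝ exp(-d²(x,c)/T),
    where d is the distance to the soft cluster mean, computed implicitly
    in the kernel-induced feature space (the kernel trick).
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    K = rbf_kernel(X, gamma)
    W = rng.random((n, k))
    W /= W.sum(1, keepdims=True)           # soft memberships
    for _ in range(iters):
        Nk = W.sum(0) + 1e-12              # soft cluster sizes
        # squared feature-space distance to each soft cluster mean:
        # K_ii - (2/Nk) Σ_j W_jk K_ij + (1/Nk²) Σ_jl W_jk W_lk K_jl
        d2 = (np.diag(K)[:, None]
              - 2 * (K @ W) / Nk
              + (W * (K @ W)).sum(0) / Nk**2)
        W = np.exp(-(d2 - d2.min(1, keepdims=True)) / T)
        W /= W.sum(1, keepdims=True)
    return W.argmax(1)

# Two well-separated blobs as a smoke test
rng = np.random.default_rng(1)
a = rng.normal(0, 0.2, (30, 2))
b = rng.normal(0, 0.2, (30, 2)) + [3, 3]
labels = kernel_maxent_cluster(np.vstack([a, b]), k=2, gamma=0.5)
```

Lowering T makes the assignments approach hard kernel k-means; raising it increases the entropy of the memberships, which is the trade-off the MEC family is built around.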
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
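The bounded-variable trick in the abstract can be sketched concretely: each unknown is split into bounded segment variables, the concave entropy integrand is replaced by its piecewise-linear chords, and the result is a plain LP. The following is a minimal sketch under our own assumptions (a probability-vector formulation with SciPy's HiGHS solver), not the authors' restoration code.

```python
import numpy as np
from scipy.optimize import linprog

def maxent_lp(A, b, n, segments=100):
    """Maximum-entropy solution of A x = b, x >= 0, sum(x) = 1, via a
    Dantzig-style piecewise-linear approximation of -x log x.

    Each x_i is split into `segments` variables bounded by the segment
    width; because the entropy integrand is concave, a maximizing LP
    fills the segments in order, so no ordering constraints are needed.
    """
    f = lambda t: -t * np.log(np.where(t > 0, t, 1.0))  # -t ln t, f(0)=0
    knots = np.linspace(0, 1, segments + 1)
    w = knots[1] - knots[0]                        # segment width
    slopes = (f(knots[1:]) - f(knots[:-1])) / w    # strictly decreasing
    # variables: y[i, s] with x_i = sum_s y[i, s]
    c = -np.tile(slopes, n)                        # linprog minimizes
    S = np.kron(np.eye(n), np.ones(segments))      # x = S @ y
    A_eq = np.vstack([A @ S, np.ones((1, n * segments))])
    b_eq = np.r_[b, 1.0]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, w), method="highs")
    return S @ res.x

# Sanity check: with only the normalization constraint, the
# maximum-entropy answer is the uniform distribution.
x = maxent_lp(np.zeros((0, 4)), np.zeros(0), n=4)
```

More segments tighten the chord approximation to the true entropy; additional linear equality or inequality constraints (the relaxation the abstract mentions) drop straight into the `A_eq`/`b_eq` rows.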
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is the combined increase of a ship's draft and trim due to its motion in restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat, and various forms of calculating it can be found in the literature; among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to examine the differences between them and determine which one provides the most satisfactory results. To this end, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
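As a flavor of the formulas being compared, here is the widely quoted simplified form of the Barrass estimate (a rule of thumb; the paper compares the full formulas, which also involve the blockage factor of the waterway, so treat the constants below as an assumption for illustration):

```python
def barrass_squat_m(cb, speed_kn, confined=False):
    """Simplified Barrass estimate of maximum squat in metres.

    cb: block coefficient; speed_kn: speed through the water in knots.
    Rule of thumb: S = Cb * V^2 / 100 in open water and
    S = Cb * V^2 / 50 in a confined channel. The full Barrass formula
    additionally uses the blockage factor of the waterway.
    """
    return cb * speed_kn ** 2 / (50 if confined else 100)

# A cargo ship (Cb ~ 0.80) at 8 knots in a canal
print(round(barrass_squat_m(0.80, 8, confined=True), 2))  # 1.02 (metres)
```

Squat of about a metre at only 8 knots illustrates why the canal-navigation case drives the comparison in the paper.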
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
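The Poisson maximum likelihood fit described here can be sketched in a few lines: for expected counts λ_i the (negative) log-likelihood reduces, up to a constant, to Σ(λ_i − n_i ln λ_i), which is then minimized over the line parameters. This is a generic illustration with SciPy and a Gaussian line model, not the CORA code or its fixed-point flux equation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 101)

def model(params, x):
    """Expected counts per bin: flat background + Gaussian emission line."""
    bg, flux, mu, sigma = params
    line = flux * np.exp(-0.5 * ((x - mu) / sigma) ** 2) \
        / (sigma * np.sqrt(2 * np.pi))
    return bg + line * (x[1] - x[0])   # flux is the line's total counts

true = (2.0, 200.0, 0.5, 0.8)          # bg/bin, line counts, center, width
counts = rng.poisson(model(true, x))   # low-count Poisson data

def neg_loglike(params):
    """Poisson negative log-likelihood (dropping the constant ln n! term)."""
    lam = model(params, x)
    if np.any(lam <= 0):
        return np.inf                  # reject unphysical expected counts
    return float((lam - counts * np.log(lam)).sum())

fit = minimize(neg_loglike, x0=(1.0, 100.0, 0.0, 1.0), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
bg, flux, mu, sigma = fit.x
```

The key point, as in the abstract, is that the rigorous Poisson likelihood is used directly on the counts; a Gaussian (chi-square) approximation would be biased at these low count levels.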
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path, and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit, a power management system based on a dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
Catalan, M.; Martos, Y. M.; Martin-Davila, J.; Munoz-Martin, A.; Carbo, A.; Druet, M.
2015-07-01
This study reviews the state of knowledge on the Iberian Atlantic margin. To do so, the margin has been divided into three provinces: the Galicia margin, the southern Iberian abyssal plain, and the Tagus abyssal plain. We have used potential field and sediment thickness data. This has allowed us to study the crust, setting limits for the continental crust domain and the extent of the so-called ocean-continent transition, whose end marks the beginning of the oceanic crust. The study shows the continental crust in the Galicia margin to be the widest, about 210 km in length, whilst the ocean-continent transition varies only slightly in this province: between 65 km wide in the south and 56 km wide in the north. This result reveals some differences from the hypotheses of other authors. The situation in the southern Iberian abyssal plain is nearly the opposite: its continental crust extends approximately 60 km, whilst the ocean-continent transition zone is 185 km long. The Tagus abyssal plain study shows a faster morphological evolution than the others, consistent with the amount of crustal thinning β, the ocean-continent transition domain spanning 100 km. These results support a transitional intermediate character for almost the whole Tagus plain, contrary to what other authors have stated.
The tragedy of the margins: land rights and marginal lands in Vietnam (c. 1800-1945)
J. Kleinen
2011-01-01
This article deals with aspects of official land registers in pre-colonial and colonial Vietnam and their relationship with marginal lands since the eleventh century, and especially since the beginning of the nineteenth century. The changing pattern of land ownership and control is studied in detail.
Talking (and Not Talking) about Race, Social Class and Dis/Ability: Working Margin to Margin
Ferri, Beth A.; Connor, David J.
2014-01-01
In this article we examine some of the omnipresent yet unacknowledged discourses of social and economic disadvantage and dis/ability within schools in the US. First, we document ways that social class, race, and dis/ability function within schools to further disadvantage and exclude already marginalized students. Next, we show how particular ways…
Ono, Toshiaki; Fushimi, Naomasa; Yamada, Kei; Asada, Hideki
2015-01-01
In terms of Sturm's theorem, we reexamine marginally stable circular orbits (MSCOs), such as the innermost stable circular orbit (ISCO), of a timelike geodesic in any spherically symmetric and static spacetime. MSCOs for some exact solutions of Einstein's equations are discussed. Sturm's theorem is explicitly applied to the Kottler (often called Schwarzschild-de Sitter) spacetime. Moreover, we analyze MSCOs for a spherically symmetric, static and vacuum solution in Weyl conformal gravity.
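Sturm's theorem, the tool used here, counts the distinct real roots of a polynomial in an interval from sign changes along the Sturm chain p, p', and successive negated polynomial-division remainders. A small self-contained sketch (generic root counting, not the paper's MSCO polynomials):

```python
import numpy as np

def sturm_chain(p):
    """Sturm sequence of polynomial p (coefficients, highest power first):
    p0 = p, p1 = p', and p_{k+1} = -rem(p_{k-1}, p_k) until a constant."""
    chain = [np.trim_zeros(np.atleast_1d(np.asarray(p, float)), "f"),
             np.polyder(np.asarray(p, float))]
    while len(chain[-1]) > 1:
        _, rem = np.polydiv(chain[-2], chain[-1])
        rem = np.trim_zeros(np.atleast_1d(-rem), "f")
        if rem.size == 0:          # repeated roots: chain terminates early
            break
        chain.append(rem)
    return chain

def sign_changes(chain, x):
    """Number of sign changes of the chain evaluated at x (zeros skipped)."""
    vals = [np.polyval(q, x) for q in chain]
    signs = [np.sign(v) for v in vals if abs(v) > 1e-12]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def count_real_roots(p, a, b):
    """Distinct real roots of p in (a, b], by Sturm's theorem."""
    chain = sturm_chain(p)
    return sign_changes(chain, a) - sign_changes(chain, b)

# x^3 - x = x(x - 1)(x + 1): three real roots in (-2, 2]
print(count_real_roots([1, 0, -1, 0], -2, 2))  # 3
```

Applied to the effective-potential condition for circular orbits, such a count tells one directly whether an MSCO exists in a given radial range, which is the spirit of the paper's analysis.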
A maximum power point tracking algorithm for photovoltaic applications
Nelatury, Sudarshan R.; Gray, Robert
2013-05-01
The voltage-current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has long been a challenge. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP, but hitherto an exact closed-form solution for the MPP has not been published. The problem can be formulated analytically as a constrained optimization and solved using the Lagrange method, which results in a system of simultaneous nonlinear equations. Solving them directly is quite difficult; however, we can employ a recursive algorithm to yield a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and the constant-power contours is the sought-for MPP. It changes with the incident irradiation and temperature, and hence an algorithm that attempts to maintain the MPP should be adaptive in nature, with fast convergence and the least misadjustment. There are two parts to the implementation: first, one needs to estimate the MPP; the second task is to have a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits has made it possible to design efficient converters. In this paper, although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance we demonstrate MPP tracking for a commercially available solar panel, the MSX-60. The power electronics circuit is simulated with PSIM software.
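The MPP estimation step can be illustrated numerically: with a single-diode I-V model, the MPP is where d(VI)/dV = 0, which a scalar optimizer finds directly. The parameters below are hypothetical illustration values (not the MSX-60 datasheet), and this generic sketch stands in for the paper's Lagrange-based recursive algorithm.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative single-diode PV model (hypothetical parameters):
# I(V) = Iph - I0 * (exp(V / (n * Vt)) - 1), 36 cells in series
IPH, I0, N, VT = 3.8, 1e-9, 1.3, 0.0258 * 36

def current(v):
    """Panel current (A) at terminal voltage v (V)."""
    return IPH - I0 * (np.exp(v / (N * VT)) - 1.0)

def power(v):
    """Delivered power P = V * I(V)."""
    return v * current(v)

# MPP: maximize P(V) over the operating range (minimize -P)
res = minimize_scalar(lambda v: -power(v), bounds=(0.0, 25.0),
                      method="bounded")
v_mpp = res.x
print(f"V_mpp = {v_mpp:.2f} V, P_mpp = {power(v_mpp):.1f} W")
```

The second implementation step in the abstract then adjusts the duty cycle of a buck-boost converter so that the load, reflected through the converter, presents the resistance V_mpp / I(V_mpp) to the panel.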
Maximum mass, moment of inertia and compactness of relativistic stars
Breu, Cosima
2016-01-01
A number of recent works have highlighted that it is possible to express the properties of general-relativistic stellar equilibrium configurations in terms of functions that do not depend on the specific equation of state employed to describe matter at nuclear densities. These functions are normally referred to as "universal relations" and have been found to apply, within limits, both to static or stationary isolated stars, as well as to fully dynamical and merging binary systems. Further extending the idea that universal relations can be valid also away from stability, we show that a universal relation is exhibited also by equilibrium solutions that are not stable. In particular, the mass of rotating configurations on the turning-point line shows a universal behaviour when expressed in terms of the normalised Keplerian angular momentum. In turn, this allows us to compute the maximum mass allowed by uniform rotation, M_{max}, simply in terms of the maximum mass of the nonrotating configuration, M_{TOV}, findi...
Maximum bubble pressure rheology of low molecular mass organogels.
Fei, Pengzhan; Wood, Steven J; Chen, Yan; Cavicchi, Kevin A
2015-01-13
Maximum bubble pressure rheology is used to characterize organogels of 0.25 wt % 12-hydroxystearic acid (12-HSA) in mineral oil, 3 wt % (1,3:2,4) dibenzylidene sorbitol (DBS) in poly(ethylene glycol), and 1 wt % 1,3:2,4-bis(3,4-dimethylbenzylidene) sorbitol (DMDBS) in poly(ethylene glycol). The maximum pressure required to inflate a bubble at the end of a capillary inserted in a gel is measured. This pressure is related to the gel modulus in the case of elastic cavitation, and to the gel modulus and toughness in the case of irreversible fracture. The 12-HSA/mineral oil gels are used to demonstrate that this is a facile technique useful for studying time-dependent gel formation and aging and the thermal transition from a gel to a solution. Comparison is made to both qualitative gel tilting measurements and quantitative oscillatory shear rheology to highlight the utility of this measurement and its complementary nature to oscillatory shear rheology. The DBS and DMDBS gels demonstrate the generality of this technique for measuring gel transition temperatures.
Exploring the Constrained Maximum Edge-weight Connected Graph Problem
Zhen-ping Li; Shi-hua Zhang; Xiang-Sun Zhang; Luo-nan Chen
2009-01-01
Given an edge-weighted graph, the maximum edge-weight connected graph (MECG) is a connected subgraph with a given number of edges and the maximal weight sum. Here we study a special case, the Constrained Maximum Edge-Weight Connected Graph problem (CMECG): an MECG whose candidate subgraphs must include a given set of k edges, also called the k-CMECG. We formulate the k-CMECG as an integer linear programming model based on the network flow problem. The k-CMECG is proved to be NP-hard. For the special case 1-CMECG, we propose an exact algorithm and a heuristic algorithm, respectively. We also propose a heuristic algorithm for the general k-CMECG problem. Simulations have been carried out to analyze the quality of these algorithms. Moreover, we show that the algorithm for the 1-CMECG problem can lead to the solution of the general MECG problem.
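For intuition on the problem definition, a brute-force sketch (not the ILP formulation or the heuristics of the paper): enumerate all m-edge subsets that contain the required edges and keep the connected subset of maximal total weight. The edge-tuple encoding and the toy graph in the test are hypothetical.

```python
from itertools import combinations

def is_connected(edges):
    # BFS over the subgraph induced by the chosen edge set
    nodes = {u for e in edges for u in e[:2]}
    if not nodes:
        return False
    adj = {n: [] for n in nodes}
    for u, v, _ in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n])
    return seen == nodes

def cmecg_bruteforce(edges, m, required):
    # k-CMECG by exhaustion: every m-edge subset must include all
    # `required` edges and be connected; keep the heaviest one.
    # Exponential in len(edges), so only viable on tiny instances.
    best, best_w = None, float("-inf")
    for subset in combinations(edges, m):
        if not all(r in subset for r in required):
            continue
        if not is_connected(list(subset)):
            continue
        w = sum(e[2] for e in subset)
        if w > best_w:
            best, best_w = subset, w
    return best, best_w
```

The NP-hardness result in the abstract is exactly why this enumeration does not scale and an ILP or heuristic is needed for realistic graphs.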
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
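A minimal sketch of the core computation, PCA as an eigendecomposition of a correlation matrix estimated from an ensemble. A synthetic ensemble with one built-in correlation mode stands in for real structures, and a plain sample correlation stands in for the paper's maximum likelihood estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ensemble": 200 conformations of 10 coordinates in which
# two groups of coordinates fluctuate together (one dominant mode)
n_conf, n_coord = 200, 10
mode = rng.normal(size=n_conf)
coords = rng.normal(scale=0.1, size=(n_conf, n_coord))
coords[:, :5] += mode[:, None]   # first five coordinates move together
coords[:, 5:] -= mode[:, None]   # last five move in anti-correlation

# Sample correlation matrix of the ensemble (a stand-in for the
# maximum likelihood estimate described in the abstract)
corr = np.corrcoef(coords, rowvar=False)

# PCA = eigendecomposition of the (symmetric) correlation matrix;
# the eigenvector with the largest eigenvalue is the major mode
evals, evecs = np.linalg.eigh(corr)
top_mode = evecs[:, -1]
```

Color-coding the entries of `top_mode` onto a structure is the essence of the "PCA plot" display described above; here the sign pattern of the eigenvector recovers the two anti-correlated groups.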
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d'Ornon Cedex (France)]
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T^-1(I - A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Maximum entropy deconvolution of the optical jet of 3C 273
Evans, I. N.; Ford, H. C.; Hui, X.
1989-01-01
The technique of maximum entropy image restoration is applied to the problem of deconvolving the point spread function from a deep, high-quality V band image of the optical jet of 3C 273. The resulting maximum entropy image has an approximate spatial resolution of 0.6 arcsec and has been used to study the morphology of the optical jet. Four regularly-spaced optical knots are clearly evident in the data, together with an optical 'extension' at each end of the optical jet. The jet oscillates around its center of gravity, and the spatial scale of the oscillations is very similar to the spacing between the optical knots. The jet is marginally resolved in the transverse direction and has an asymmetric profile perpendicular to the jet axis. The distribution of V band flux along the length of the jet, and accurate astrometry of the optical knot positions are presented.
Carriquiry, José; Sanchez, Alberto; Leduc, Guillaume
2015-01-01
The oxygen and carbon isotopic compositions of benthic foraminiferal tests were measured on sedimentary sequences retrieved on the Magdalena Margin, off southern Baja California, Mexico. We reconstruct the hydrographic changes along the water column that occurred in the northeastern tropical Pacific since the Last Glacial Maximum (LGM) and compare those changes to the ones that occurred in the northwest Pacific (NWP, i.e., off Japan and Russia), in the northeast Pacifi...
Nabel, Moritz; Bueno Piaz Barbosa, Daniela; Horsch, David; Jablonowski, Nicolai David
2014-05-01
The global demand for energy security and the mitigation of climate change are the main drivers pushing energy-plant production in Germany. However, the cultivation of these plants can cause land use conflicts, since agricultural soil is mostly used for plant production. A sustainable alternative to the conventional cultivation of food-based energy crops is the cultivation of specially adapted energy plants on marginal lands. To further increase the sustainability of energy-plant cultivation systems, the dependency on synthetic fertilizers needs to be reduced via closed nutrient loops. In the presented study the energy plant Sida hermaphrodita (Malvaceae) will be used to evaluate the potential to grow this high-potential energy crop on a marginal sandy soil in combination with fertilization via digestate from biogas production. With this dose-response experiment we will further identify an optimum dose, which will be compared to equivalent doses of NPK fertilizer. Further, lethal doses and deficiency doses will be observed. Two-week-old Sida seedlings were transplanted to 1 L pots and fertilized with six doses of digestate (equivalent to a field application of 5, 10, 20, 40, 80, 160 t/ha) and three equivalent doses of NPK fertilizer. Control plants were left untreated. Sida plants will grow for 45 days under greenhouse conditions. We hypothesize that the nutrient status of the marginal soil can be increased and maintained by defined digestate applications, compared to control plants suffering from nutrient deficiency due to the low nutrient status in the marginal substrate. The dose of 40 t/ha is expected to give a maximum biomass yield without causing toxicity symptoms. Results shall be used as a basis for further experiments at the field scale in a field trial that was set up to investigate sustainable production systems for energy crop production under marginal soil conditions.
Seiler, Christian; Gleadow, Andrew J. W.; Fletcher, John M.; Kohn, Barry P.
2009-07-01
The Ballenas transform margin in central Baja California offers an unparalleled opportunity to study the thermal behaviour of a sheared continental margin during various stages of its evolution. Apatite fission track and (U-Th)/He results from two transects perpendicular to the coast reveal a pronounced latest Pliocene to Pleistocene (~ 1.8 Ma) heating event related to the Neogene opening of the Gulf of California. Proximity to a regional pre-rift unconformity indicates that samples remained at near-surface levels since Paleogene unroofing, despite having experienced reheating to maximum paleotemperatures within or above the fission track partial annealing zone. In general, maximum paleotemperatures during overprinting decrease from > 100-120 °C near the coast to below 60 °C ca. 5-8 km further inland, suggesting lateral heat flow from a source within the Gulf of California. Heat sources related to the structural development of the Ballenas transform fault, located approximately 1.5-4.5 km offshore from the two sample transects, most likely controlled the observed reheating. Overprinting patterns do not support conductive reheating due to reburial, magmatism or frictional shear. Instead, a pronounced thermal spike in between much less overprinted neighbouring samples strongly favours convective heating by hydrothermal fluids as the dominant overprinting process. Hydrothermal activity may be caused by either deep fluid circulation along newly formed shear zones of the transform fault or, more likely, magmatic leaking along the transform fault. Latest Pliocene to Pleistocene (~ 1.8 Ma) activity on the Ballenas transform fault is closely linked to extension in the Lower and Upper Delfín basins and provides a minimum age for the structural reorganisation and the relocation of extension in the northern Gulf of California. This study shows that hydrothermal activity can cause significant thermal events in a transform margin before the passage of the spreading centre
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Evidence for Marginal Stability in Emulsions
Lin, Jie; Jorjadze, Ivane; Pontani, Lea-Laetitia; Wyart, Matthieu; Brujic, Jasna
2016-11-01
We report the first measurements of the effect of pressure on vibrational modes in emulsions, which serve as a model for soft frictionless spheres at zero temperature. As a function of the applied pressure, we find that the density of states D(ω) exhibits a low-frequency cutoff ω*, which scales linearly with the number of extra contacts per particle δz. Moreover, for ω < ω* the results are in agreement with theoretical predictions [Soft Matter 10, 5628 (2014); S. Franz, G. Parisi, P. Urbani, and F. Zamponi, Proc. Natl. Acad. Sci. U.S.A. 112, 14539 (2015)]. Finally, the degree of localization of the softest low frequency modes increases with compression, as shown by the participation ratio as well as their spatial configurations. Overall, our observations show that emulsions are marginally stable and display non-plane-wave modes up to vanishing frequencies.
Regnar -- Development of a marginal field
Thalund, K.M.; Brodersen, F.P.; Roigaard-Petersen, B. [Maersk Olie og Gas AS, Copenhagen (Denmark)
1994-12-31
Regnar is a small marginal field located some 13 km from the main Dan F complex and is the first subsea completion in Danish waters, operated by Maersk Olie og Gas AS. A short lifetime has been predicted for the field which therefore has been developed as a low cost project, using a combination of subsea technology and minimum topside facilities. Regnar consists of a subsea x-mas tree producing through a 6 inch pipeline with a 2 1/2 inch chemical piggyback line to Dan F. The x-mas tree and the subsea choke valve are controlled from a buoy moored nearby the well. The buoy is radio linked to Dan F. The Regnar field was brought on stream on September 26, 1993.
Max-margin based Bayesian classifier
Tao-cheng HU‡; Jin-hui YU
2016-01-01
There is a tradeoff between generalization capability and computational overhead in multi-class learning. We propose a generative probabilistic multi-class classifier, considering both the generalization capability and the learning/prediction rate. We show that the classifier has a max-margin property. Thus, prediction on future unseen data can nearly achieve the same performance as in the training stage. In addition, local variables are eliminated, which greatly simplifies the optimization problem. By convex and probabilistic analysis, an efficient online learning algorithm is developed. The algorithm aggregates rather than averages dualities, which is different from the classical situations. Empirical results indicate that our method has a good generalization capability and coverage rate.
MARGINAL EXPENSE OIL WELL WIRELESS SURVEILLANCE MEOWS
Mason M. Medizade; John R. Ridgely; Donald G. Nelson
2004-11-01
A marginal expense oil well wireless surveillance system to monitor system performance and production from rod-pumped wells in real time from wells operated by Vaquero Energy in the Edison Field, Main Area of Kern County in California has been successfully designed and field tested. The surveillance system includes a proprietary flow sensor, a programmable transmitting unit, a base receiver and receiving antenna, and a base station computer equipped with software to interpret the data. First, the system design is presented. Second, field data obtained from three wells is shown. Results of the study show that an effective, cost competitive, real-time wireless surveillance system can be introduced to oil fields across the United States and the world.
Asymptotic behavior of marginally trapped tubes in spherically symmetric black hole spacetimes
Williams, Catherine M.
We begin by reviewing some fundamental features of general relativity, then outline the mathematical definitions of black holes, trapped surfaces, and marginally trapped tubes, first in general terms, then rigorously in the context of spherical symmetry. We describe explicitly the reduction of Einstein's equation on a spherically symmetric 4-dimensional Lorentzian manifold to a system of partial differential equations on a subset of 2-dimensional Minkowski space. We discuss the asymptotic behavior of marginally trapped tubes in the Schwarzschild, Vaidya, and Reissner-Nordström solutions to Einstein's equations in spherical symmetry, as well as in Einstein-Maxwell-scalar field black hole spacetimes generated by evolving certain classes of asymptotically flat initial data. Our first main result gives conditions on a general stress-energy tensor T_αβ in a spherically symmetric black hole spacetime that are sufficient to guarantee that the black hole will contain a marginally trapped tube which is eventually achronal, connected, and asymptotic to the event horizon. Here "general" means that the matter model is arbitrary, subject only to a certain positive energy condition. A certain matter field decay rate, known as Price law decay in the literature, is not required per se for this asymptotic result, but such decay does imply that the marginally trapped tube has finite length with respect to the induced metric. In our second main result, we give two separate applications of the first theorem to self-gravitating Higgs field spacetimes, one using weak Price law decay, the other certain strong smallness and monotonicity assumptions.
Doerfler, Arnaud; Oitchayomi, Abeni; Tillou, Xavier
2014-11-01
To describe a simple method for ensuring surgical margins during laparoscopic partial nephrectomy (PN). A study was done at our institution from October 2013 to March 2014 for all patients undergoing laparoscopic PN for T1 renal tumors. Before tumor removal, intraoperative ultrasonography (US) localization was performed. The tumor was then removed with a standardized minimal healthy tissue margin technique. Immediately after removal and before performing hemostasis of the kidney, the specimen was placed into a laparoscopic endobag filled with saline solution. The laparoscopic probe was then placed into the endobag and a sequential ultrasonographic scan was performed to evaluate whether the tumor's pseudocapsule was respected. Twelve patients were included in our study. Mean warm ischemia time was 19 ± 3 minutes. Mean US examination time was 42 ± 9 seconds. US analysis of surgical margins was negative in all except 1 patient. The final histologic examination of all specimens confirmed US results with a 100% correlation. We describe an original, simple, and cost-effective method for ensuring surgical margins during laparoscopic PN with a moderate increase in warm ischemia time. Copyright © 2014 Elsevier Inc. All rights reserved.
Assessment of marginal stability and permeability of an interim restorative endodontic material.
Kazemi, R B; Safavi, K E; Spångberg, L S
1994-12-01
The purpose of this study was to assess the marginal stability and permeability of a new interim restorative endodontic material, Tempit (Centrix Inc., Milford, Conn.), and to compare the findings with the results of two commonly used restorative endodontic materials, Cavit (Premier Dental Products Co., Philadelphia, Pa.) and IRM (Intermediate Restorative Material Capsules, The Caulk Co., Division of Dentsply International Inc., Milford, Del.). This study was performed in several steps. First, the endodontic access cavities were prepared and restored on 80 extracted mandibular molars. The samples were exposed to methylene blue dye solution for 6 days, thermocycled, and sectioned; the dye penetration and diffusion were measured along the margins and into the body of the materials. The second experiment was a special study performed in standardized glass tubes to better evaluate the marginal and body dye penetration into the materials by increasing the length of the fillings. To eliminate the possibility of hygroscopic setting mechanisms of the materials, samples were first allowed to set under water before dye was introduced. Cavit and Tempit showed a substantial amount of dye diffusion into the body of the materials. Cavit exhibited the best sealing ability at all times. The marginal and body dye penetration were significantly different for Tempit than for Cavit in all experiments. IRM demonstrated the least body penetration of all three materials; the remaining comparisons were not statistically significant (p = 0.6 and p = 0.1). (ABSTRACT TRUNCATED AT 250 WORDS)
The genetics of nodal marginal zone lymphoma.
Spina, Valeria; Khiabanian, Hossein; Messina, Monica; Monti, Sara; Cascione, Luciano; Bruscaggin, Alessio; Spaccarotella, Elisa; Holmes, Antony B; Arcaini, Luca; Lucioni, Marco; Tabbò, Fabrizio; Zairis, Sakellarios; Diop, Fary; Cerri, Michaela; Chiaretti, Sabina; Marasca, Roberto; Ponzoni, Maurilio; Deaglio, Silvia; Ramponi, Antonio; Tiacci, Enrico; Pasqualucci, Laura; Paulli, Marco; Falini, Brunangelo; Inghirami, Giorgio; Bertoni, Francesco; Foà, Robin; Rabadan, Raul; Gaidano, Gianluca; Rossi, Davide
2016-09-08
Nodal marginal zone lymphoma (NMZL) is a rare, indolent B-cell tumor that is distinguished from splenic marginal zone lymphoma (SMZL) by the different pattern of dissemination. NMZL still lacks distinct markers and remains orphan of specific cancer gene lesions. By combining whole-exome sequencing, targeted sequencing of tumor-related genes, whole-transcriptome sequencing, and high-resolution single nucleotide polymorphism array analysis, we aimed at disclosing the pathways that are molecularly deregulated in NMZL and we compare the molecular profile of NMZL with that of SMZL. These analyses identified a distinctive pattern of nonsilent somatic lesions in NMZL. In 35 NMZL patients, 41 genes were found recurrently affected in ≥3 (9%) cases, including highly prevalent molecular lesions of MLL2 (also known as KMT2D; 34%), PTPRD (20%), NOTCH2 (20%), and KLF2 (17%). Mutations of PTPRD, a receptor-type protein tyrosine phosphatase regulating cell growth, were enriched in NMZL across mature B-cell tumors, functionally caused the loss of the phosphatase activity of PTPRD, and were associated with cell-cycle transcriptional program deregulation and increased proliferation index in NMZL. Although NMZL shared with SMZL a common mutation profile, NMZL harbored PTPRD lesions that were otherwise absent in SMZL. Collectively, these findings provide new insights into the genetics of NMZL, identify PTPRD lesions as a novel marker for this lymphoma across mature B-cell tumors, and support the distinction of NMZL as an independent clinicopathologic entity within the current lymphoma classification. © 2016 by The American Society of Hematology.
Christov, Ivan C
2012-01-01
In classical continuum physics, a wave is a mechanical disturbance. Whether the disturbance is stationary or traveling and whether it is caused by the motion of atoms and molecules or the vibration of a lattice structure, a wave can be understood as a specific type of solution of an appropriate mathematical equation modeling the underlying physics. Typical models consist of partial differential equations that exhibit certain general properties, e.g., hyperbolicity. This, in turn, leads to the possibility of wave solutions. Various analytical techniques (integral transforms, complex variables, reduction to ordinary differential equations, etc.) are available to find wave solutions of linear partial differential equations. Furthermore, linear hyperbolic equations with higher-order derivatives provide the mathematical underpinning of the phenomenon of dispersion, i.e., the dependence of a wave's phase speed on its wavenumber. For systems of nonlinear first-order hyperbolic equations, there also exists a general ...
An approximate, maximum terminal velocity descent to a point
Eisler, G.R.; Hull, D.G.
1987-01-01
No closed form control solution exists for maximizing the terminal velocity of a hypersonic glider at an arbitrary point. As an alternative, this study uses neighboring extremal theory to provide a sampled data feedback law to guide the vehicle to a constrained ground range and altitude. The guidance algorithm is divided into two parts: 1) computation of a nominal, approximate, maximum terminal velocity trajectory to a constrained final altitude and computation of the resulting unconstrained groundrange, and 2) computation of the neighboring extremal control perturbation at the sample value of flight path angle to compensate for changes in the approximate physical model and enable the vehicle to reach the on-board computed groundrange. The trajectories are characterized by glide and dive flight to the target to minimize the time spent in the denser parts of the atmosphere. The proposed on-line scheme successfully brings the final altitude and range constraints together, as well as compensates for differences in flight model, atmosphere, and aerodynamics at the expense of guidance update computation time. Comparison with an independent, parameter optimization solution for the terminal velocity is excellent. 6 refs., 3 figs.
Toner, J. D.; Catling, D. C.; Light, B.
2014-05-01
Salt solutions on Mars can stabilize liquid water at low temperatures by lowering the freezing point of water. The maximum equilibrium freezing-point depression possible, known as the eutectic temperature, suggests a lower temperature limit for liquid water on Mars; however, salt solutions can supercool below their eutectic before crystallization occurs. To investigate the magnitude of supercooling and its variation with salt composition and concentration, we performed slow cooling and warming experiments on pure salt solutions and saturated soil-solutions of MgSO4, MgCl2, NaCl, NaClO4, Mg(ClO4)2, and Ca(ClO4)2. By monitoring solution temperatures, we identified exothermic crystallization events and determined the composition of precipitated phases from the eutectic melting temperature. Our results indicate that supercooling is pervasive. In general, supercooling is greater in more concentrated solutions and with salts of Ca and Mg. Slowly cooled MgSO4, MgCl2, NaCl, and NaClO4 solutions investigated in this study typically supercool 5-15 °C below their eutectic temperature before crystallizing. The addition of soil to these salt solutions has a variable effect on supercooling. Relative to the pure salt solutions, supercooling decreases in MgSO4 soil-solutions, increases in MgCl2 soil-solutions, and is similar in NaCl and NaClO4 soil-solutions. Supercooling in MgSO4, MgCl2, NaCl, and NaClO4 solutions could marginally extend the duration of liquid water during relatively warm daytime temperatures in the martian summer. In contrast, we find that Mg(ClO4)2 and Ca(ClO4)2 solutions do not crystallize during slow cooling, but remain in a supercooled, liquid state until forming an amorphous glass near -120 °C. Even if soil is added to the solutions, a glass still forms during cooling. The large supercooling effect in Mg(ClO4)2 and Ca(ClO4)2 solutions has the potential to prevent water from freezing over diurnal and possibly annual cycles on Mars. Glasses are also
Maximum entropy principle and texture formation
Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250-370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
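The spectrally-imposed limit can be illustrated with a short calculation. The Python sketch below computes the luminous efficacy of blackbody light restricted to a bandpass, using a Gaussian stand-in for the CIE photopic curve V(λ) (peak 683 lm/W at 555 nm); the Gaussian width, the temperature, and the bandpass are illustrative assumptions, not values from the paper.

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann (SI)

def planck(lam_m, T):
    # Blackbody spectral radiance vs. wavelength (overall scale cancels below).
    return lam_m ** -5.0 / np.expm1(h * c / (lam_m * kB * T))

def photopic(lam_nm):
    # Gaussian stand-in for the CIE photopic curve, peaking at 555 nm.
    return np.exp(-0.5 * ((lam_nm - 555.0) / 45.0) ** 2)

def spectral_efficacy(T, lo_nm, hi_nm):
    # Luminous efficacy (lm/W) of blackbody light restricted to [lo, hi] nm;
    # on a uniform grid, sums stand in for the two integrals.
    lam_nm = np.linspace(lo_nm, hi_nm, 4000)
    S = planck(lam_nm * 1e-9, T)
    return 683.0 * np.sum(photopic(lam_nm) * S) / np.sum(S)
```

For a 5800 K source restricted to 400-700 nm this lands in the few-hundred lm/W range, in line with the 250-370 lm/W window quoted above; the exact number depends on the crude V(λ) stand-in and the chosen bandpass.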
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, the researches on the synchronization phenomenon are key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
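A pairwise maximum entropy fit of the kind described can be sketched for a toy system. The Python example below fits fields and couplings by exact moment matching over all 2^N states (feasible because N is tiny); the three-unit binary "data" standing in for expansion (+1) / recession (-1) states are synthetic, not the G7 series analyzed in the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N = 3
states = np.array(list(itertools.product([-1.0, 1.0], repeat=N)))

# Synthetic "observed" binary series (invented stand-in for recession data).
data = rng.choice([-1.0, 1.0], size=(500, N), p=[0.4, 0.6])
m_obs = data.mean(axis=0)                                   # <s_i>
C_obs = (data[:, :, None] * data[:, None, :]).mean(axis=0)  # <s_i s_j>

# Fit fields h and couplings J by gradient ascent on the exact
# log-likelihood, which is equivalent to matching the first two moments.
h, J = np.zeros(N), np.zeros((N, N))
for _ in range(3000):
    E = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(E - E.max()); p /= p.sum()          # Boltzmann distribution
    m = p @ states                                 # model <s_i>
    C = np.einsum('k,ki,kj->ij', p, states, states)  # model <s_i s_j>
    h += 0.1 * (m_obs - m)
    J += 0.1 * (C_obs - C)
    np.fill_diagonal(J, 0.0)
```

At convergence the model reproduces the means and pairwise correlations exactly, which is the defining property of the pairwise maximum entropy distribution.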
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum space quantum gravity geometry with a metric s_μν and a curvature tensor P^λ_μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur at high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, whether the objects of interest are moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the training models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.
Efeoglu, Arkin; Møller, Charles; Serie, Michel
2013-01-01
This paper outlines an artifact building and evaluation proposal. Design Science Research (DSR) studies usually consider encapsulated artifacts that have relationships with other artifacts. The solution prototype, as a composed artifact, demands a more comprehensive consideration in its systematic ... environment. The solution prototype that is composed by blending product and service prototypes has particular impacts on the dualism of DSR's "Build" and "Evaluate". Since the mix between product and service prototyping can be varied, there is a demand for a more agile and iterative framework. Van de Ven...
Soliman, Sebastian; Preidl, Reinhard; Karl, Sabine; Hofmann, Norbert; Krastl, Gabriel; Klaiber, Bernd
2016-01-01
To investigate the influence of three cavity designs on the marginal seal of large Class II cavities restored with low-shrinkage resin composite limited to the enamel. One hundred twenty (120) intact human molars were randomly divided into 12 groups, with three different cavity designs: 1. undermined enamel, 2. box-shaped, and 3. proximal bevel. The teeth were restored with 1. an extra-low shrinkage (ELS) composite free of diluent monomers, 2. microhybrid composite (Herculite XRV), 3. nanohybrid composite (Filtek Supreme XTE), and 4. silorane-based composite (Filtek Silorane). After artificial aging by thermocycling and storage in physiological saline, epoxy resin replicas were prepared. To determine the integrity of the restorations' approximal margins, two methods were sequentially employed: 1. replicas were made of the 120 specimens and examined using SEM, and 2. the same 120 specimens were immersed in AgNO3 solution, and the dye penetration depth was observed with a light microscope. Statistical analysis was performed using the Kruskal-Wallis and the Dunn-Bonferroni tests. After bevel preparation, SEM observations showed that restorations did not exhibit a higher percentage of continuous margin (SEM analysis; p>0.05), but more leakage was found than with the other cavity designs (p<0.05); the proximal bevel thus offers no advantage for composite restorations and is no longer recommended. However, undermined enamel should be removed to prevent enamel fractures.
MAXIMUM PRINCIPLES OF NONHOMOGENEOUS SUBELLIPTIC P-LAPLACE EQUATIONS AND APPLICATIONS
Liu Haifeng; Niu Pengcheng
2006-01-01
Maximum principles for weak solutions of nonhomogeneous subelliptic p-Laplace equations related to smooth vector fields {Xj} satisfying the Hörmander condition are proved by the choice of suitable test functions and the adaptation of the classical Moser iteration method. Some applications are given in this paper.
On the Existence and Uniqueness of Maximum-Likelihood Estimates in the Rasch Model.
Fischer, Gerhard H.
1981-01-01
Necessary and sufficient conditions for the existence and uniqueness of a solution of the so-called "unconditional" and the "conditional" maximum-likelihood estimation equations in the dichotomous Rasch model are given. It is shown how to apply the results in practical uses of the Rasch model.
Cetin, Bilge Kartal; Prasad, Neeli R.; Prasad, Ramjee
2011-01-01
of the maximum lifetime routing problem that considers the operation modes of the node. Solution of the linear programming gives the upper analytical bound for the network lifetime. In order to illustrate the application of the optimization model, we solved the problem for different parameter settings...
CytoMCS: A Multiple Maximum Common Subgraph Detection Tool for Cytoscape
Larsen, Simon; Baumbach, Jan
2017-01-01
such analyses we have developed CytoMCS, a Cytoscape app for computing inexact solutions to the maximum common edge subgraph problem for two or more graphs. Our algorithm uses an iterative local search heuristic for computing conserved subgraphs, optimizing a squared edge conservation score that is able...
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated to investigate the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in the maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained by grip spans of 50-mm, 65-mm, and 80-mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in the cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Georgiades, Christos, E-mail: g_christos@hotmail.com [Johns Hopkins University, Department of Vascular and Interventional Radiology, Interventional Radiology Center (United States); Rodriguez, Ronald, E-mail: rrodrig@jhmi.edu [Johns Hopkins University, Department of Urology (United States); Azene, Ezana, E-mail: eazene1@jhmi.edu; Weiss, Clifford, E-mail: cweiss@jhmi.edu [Johns Hopkins University, Department of Vascular and Interventional Radiology, Interventional Radiology Center (United States); Chaux, Alcides, E-mail: achaux1@jhmi.edu; Gonzalez-Roibon, Nilda, E-mail: ngonzal6@jhmi.edu; Netto, George, E-mail: gnetto1@jhmi.edu [Johns Hopkins University, Department of Urologic Pathology (United States)
2013-06-15
Objective. The study was designed to determine the distance between the visible 'ice-ball' and the lethal temperature isotherm for normal renal tissue during cryoablation. Methods. The Animal Care Committee approved the study. Nine adult swine were used: three to determine the optimum tissue stain and six to test the hypotheses. They were anesthetized and the left renal artery was catheterized under fluoroscopy. Under MR guidance, the kidney was ablated and (at end of a complete ablation) the nonfrozen renal tissue (surrounding the 'ice-ball') was stained via renal artery catheter. Kidneys were explanted and sent for slide preparation and examination. From each slide, we measured the maximum, minimum, and an in-between distance from the stained to the lethal tissue boundaries (margin). We examined each slide for evidence of 'heat pump' effect. Results. A total of 126 measurements of the margin (visible 'ice-ball'-lethal margin) were made. These measurements were obtained from 29 slides prepared from the 6 test animals. Mean width was 0.75 ± 0.44 mm (maximum 1.15 ± 0.51 mm). It was found to increase adjacent to large blood vessels. No 'heat pump' effect was noted within the lethal zone. Data are limited to normal swine renal tissue. Conclusions. Considering the effects of the 'heat pump' phenomenon for normal renal tissue, the margin was measured to be 1.15 ± 0.51 mm. To approximate the efficacy of the 'gold standard' (partial nephrectomy, ~98%), a minimum margin of 3 mm is recommended (3 × SD). Given these assumptions and extrapolating for renal cancer, which reportedly is more cryoresistant with a lethal temperature of -40 °C, the recommended margin is 6 mm.
Analysis of System Margins on Missions Utilizing Solar Electric Propulsion
Oh, David Y.; Landau, Damon; Randolph, Thomas; Timmerman, Paul; Chase, James; Sims, Jon; Kowalkowski, Theresa
2008-01-01
NASA's Jet Propulsion Laboratory has conducted a study focused on the analysis of appropriate margins for deep space missions using solar electric propulsion (SEP). The purpose of this study is to understand the links between disparate system margins (power, mass, thermal, etc.) and their impact on overall mission performance and robustness. It is determined that the various sources of uncertainty and risk associated with electric propulsion mission design can be summarized into three relatively independent parameters 1) EP Power Margin, 2) Propellant Margin and 3) Duty Cycle Margin. The overall relationship between these parameters and other major sources of uncertainty is presented. A detailed trajectory analysis is conducted to examine the impact that various assumptions related to power, duty cycle, destination, and thruster performance including missed thrust periods have on overall performance. Recommendations are presented for system margins for deep space missions utilizing solar electric propulsion.
Farnsworth, L. B.; Kelly, M. A.; Axford, Y.; Bromley, G. R.; Osterberg, E. C.; Howley, J. A.; Zimmerman, S. R. H.; Jackson, M. S.; Lasher, G. E.; McFarlin, J. M.
2015-12-01
Defining the late glacial and Holocene fluctuations of the Greenland Ice Sheet (GrIS) margin, particularly during periods that were as warm or warmer than present, provides a longer-term perspective on present ice margin fluctuations and informs how the GrIS may respond to future climate conditions. We focus on mapping and dating past GrIS extents in the Nunatarssuaq region of northwestern Greenland. During the summer of 2014, we conducted geomorphic mapping and collected rock samples for 10Be surface exposure dating as well as subfossil plant samples for 14C dating. We also obtained sediment cores from an ice-proximal lake. Preliminary 10Be ages of boulders deposited during deglaciation of the GrIS subsequent to the Last Glacial Maximum range from ~30-15 ka. The apparently older ages of some samples indicate the presence of 10Be inherited from prior periods of exposure. These ages suggest deglaciation occurred by ~15 ka; however, further data are needed to test this hypothesis. Subfossil plants exposed at the GrIS margin on shear planes date to ~4.6-4.8 cal. ka BP and indicate less extensive ice during middle Holocene time. Additional radiocarbon ages from in situ subfossil plants on a nunatak date to ~3.1 cal. ka BP. Geomorphic mapping of glacial landforms near Nordsø, a large proglacial lake, including grounding lines, moraines, paleo-shorelines, and deltas, indicates the existence of a higher lake level that resulted from a more extensive GrIS margin, likely during Holocene time. A fresh drift limit, characterized by unweathered, lichen-free clasts approximately 30-50 m distal to the modern GrIS margin, is estimated to be late Holocene in age. 10Be dating of samples from these geomorphic features is in progress. Radiocarbon ages of subfossil plants exposed by recent retreat of the GrIS margin suggest that the GrIS was at or behind its present location at AD ~1650-1800 and ~1816-1889. Results thus far indicate that the GrIS margin in northwestern Greenland
Environmental Knowledge and Marginalized Communities: The Last Mile Connectivity
Greg Chester
2006-03-01
Expanding globalization implies, among other things, growing interdependence among peoples of the world. The convergence of information and communication technologies (ICTs) is enabling almost seamless access to a vast and varied range of information and knowledge sources from anywhere at any time. These are features of the emerging knowledge society. However, a substantial proportion of the marginalized communities in most developing countries, and even in some of the technologically advanced countries, do not appear to be benefiting from these developments. They do not feel that they are participating in or contributing to the society at large. Yet they possess valuable knowledge about nature and its offerings, and ethnic, cultural, and spiritual values that can benefit societies beyond their own communities. These communities suffer from several types of handicaps - low literacy, multiplicity of dialects, vulnerability to external exploitation, etc. There are also several impediments to communicating and introducing new ideas, innovations, and technologies into these communities. All these need to be examined, and necessary measures and strategies adopted at local, national and international levels to overcome these barriers. Extending ICTs per se to these communities is not a solution. Human intervention is necessary to solve the last mile problem. Illustrative case studies of problems, issues and initiatives undertaken in different countries are briefly described.
Investigation of the maximum amplitude increase from the Benjamin-Feir instability
Karjanto, N; Peterson, P
2011-01-01
The Nonlinear Schrödinger (NLS) equation is used to model surface waves in wave tanks of hydrodynamic laboratories. Analysis of the linearized NLS equation shows that its harmonic solutions with a small amplitude modulation have a tendency to grow exponentially due to the so-called Benjamin-Feir instability. To investigate this growth in detail, we relate the linearized solution of the NLS equation to a fully nonlinear, exact solution, called soliton on finite background. As a result, we find that in the range of instability the maximum amplitude increase is finite and can be at most three times the initial amplitude.
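The finite amplification can be made explicit with the standard soliton-on-finite-background (Akhmediev-type) formulas for the focusing NLS equation over a unit-amplitude plane-wave background; this normalization is an assumption of the sketch, not taken from the paper. For modulation parameter a in (0, 1/2), the peak amplitude is 1 + 2*sqrt(2a) times the background, so the increase stays below a factor of three.

```python
import numpy as np

# Modulation parameter of the breather family, 0 < a < 1/2.
a = np.linspace(1e-4, 0.5 - 1e-4, 2000)

# Modulation frequency and linear (Benjamin-Feir) growth rate for a
# unit-amplitude background in the standard focusing-NLS scaling.
omega = 2.0 * np.sqrt(1.0 - 2.0 * a)
growth = np.sqrt(8.0 * a * (1.0 - 2.0 * a))

# Peak amplitude of the nonlinear solution relative to the background:
# bounded above by 3, approached only in the limit a -> 1/2.
amplification = 1.0 + 2.0 * np.sqrt(2.0 * a)
```

The linear analysis alone predicts unbounded exponential growth; the nonlinear solution caps the amplification, which is the point of the abstract.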
Margin Requirements and Portfolio Optimization: A Geometric Approach
Sheng Guo
2014-01-01
Using geometric illustrations, we investigate the equilibrium implications for portfolio optimization that can be generated by the simple mean-variance framework under margin borrowing restrictions. First, we investigate the case of uniform marginability on all risky assets. It is shown that changing from unlimited borrowing to margin borrowing shifts the market portfolio to a riskier combination, accompanied by a higher risk premium and a lower price of risk. With the linear risk-return prefe...
Marginal pricing of transmission services. An analysis of cost recovery
Perez-Arriaga, I.J.; Rubio, F.J. [Instituto de Investigacion Tecnologica, Universidad Pontificia Comillas, Madrid (Spain); Puerta, J.F.; Arceluz, J.; Marin, J. [Unidad de Planificacion Estrategica, Iberdrola, Madrid (Spain)
1996-12-31
The authors present an in-depth analysis of network revenues that are computed with marginal pricing, and investigate the reasons why marginal prices in actual power systems fail to recover total incurred network costs. The major causes of the failure are identified and illustrated with numerical examples. The paper analyzes the regulatory implications of marginal network pricing in the context of competitive electricity markets and provides suggestions for the meaningful allocation of network costs among users. 5 figs., 9 tabs., 8 refs.
Convex games, clan games, and their marginal games
Branzei , Rodica; Dimitrov, Dinko; Tijs, Stef
2005-01-01
We provide characterizations of convex games and total clan games by using properties of their corresponding marginal games. As it turns out, a cooperative game is convex if and only if all its marginal games are superadditive, and a monotonic game satisfying the veto player property with respect to the members of a coalition C is a total clan game (with clan C) if and only if all its C-based marginal games are subadditive.
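The convexity characterization can be checked by brute force on a small example. The Python sketch below uses the illustrative convex game v(S) = |S|^2 on three players; the helper names are ours, not from the paper.

```python
from itertools import combinations

players = (0, 1, 2)

def subsets(ground):
    # All subsets of a frozenset, as frozensets.
    for n in range(len(ground) + 1):
        yield from map(frozenset, combinations(tuple(ground), n))

ground = frozenset(players)
# v(S) = |S|^2 is a classic convex (supermodular) game.
v = {S: len(S) ** 2 for S in subsets(ground)}

def is_convex(v, ground):
    # Supermodularity: v(S u T) + v(S n T) >= v(S) + v(T) for all S, T.
    return all(v[S | T] + v[S & T] >= v[S] + v[T]
               for S in subsets(ground) for T in subsets(ground))

def marginal_game(v, ground, N):
    # Marginal game based on coalition N: v_N(S) = v(S u N) - v(N),
    # defined on coalitions S of the remaining players.
    rest = ground - N
    return {S: v[S | N] - v[N] for S in subsets(rest)}, rest

def is_superadditive(w, ground):
    # w(S u T) >= w(S) + w(T) for all disjoint S, T.
    return all(w[S | T] >= w[S] + w[T]
               for S in subsets(ground) for T in subsets(ground) if not (S & T))

convex_ok = is_convex(v, ground)
marginal_ok = all(is_superadditive(*marginal_game(v, ground, N))
                  for N in subsets(ground))
```

For this convex game every marginal game comes out superadditive, as the characterization above states; a non-convex game would fail the check for some base coalition N.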
Ex vivo ultrasound control of resection margins during partial nephrectomy.
Doerfler, Arnaud; Cerantola, Yannick; Meuwly, Jean-Yves; Lhermitte, Benoît; Bensadoun, Henri; Jichlinski, Patrice
2011-12-01
Surgery remains the treatment of choice for localized renal neoplasms. While radical nephrectomy was long considered the gold standard, partial nephrectomy has equivalent oncological results for small tumors. The role of negative surgical margins continues to be debated. Intraoperative frozen section analysis is expensive and time-consuming. We assessed the feasibility of intraoperative ex vivo ultrasound of resection margins in patients undergoing partial nephrectomy and its correlation with margin status on definitive pathological evaluation. A study was done at 2 institutions from February 2008 to March 2011. Patients undergoing partial nephrectomy for T1-T2 renal tumors were included in analysis. Partial nephrectomy was done by a standardized minimal healthy tissue margin technique. After resection the specimen was kept in saline and tumor margin status was immediately determined by ex vivo ultrasound. Sequential images were obtained to evaluate the whole tumor pseudocapsule. Results were compared with margin status on definitive pathological evaluation. A total of 19 men and 14 women with a mean ± SD age of 62 ± 11 years were included in analysis. Intraoperative ex vivo ultrasound revealed negative surgical margins in 30 cases and positive margins in 2 while it could not be done in 1. Final pathological results revealed negative margins in all except 1 case. Ultrasound sensitivity and specificity were 100% and 97%, respectively. Median ultrasound duration was 1 minute. Mean tumor and margin size was 3.6 ± 2.2 cm and 1.5 ± 0.7 mm, respectively. Intraoperative ex vivo ultrasound of resection margins in patients undergoing partial nephrectomy is feasible and efficient. Large sample studies are needed to confirm its promising accuracy to determine margin status. Copyright © 2011 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Boost invariant marginally trapped surfaces in Minkowski 4-space
Haesen, S [Department of Mathematics, Katholieke Universiteit Leuven, Celestijnenlaan 200B, 3001 Heverlee (Belgium); Ortega, M [Departamento de Geometría y Topología, Universidad de Granada, 18071 Granada (Spain)
2007-11-21
The extremal and partly marginally trapped surfaces in the Minkowski 4-space, which are invariant under the group of boost isometries, are classified. Moreover, it is shown that there do not exist extremal surfaces of this kind with constant Gaussian curvature. A procedure is given in order to construct a partly marginally trapped surface by gluing two marginally trapped surfaces which are invariant under the group of boost isometries. As an application, a proper star-surface is constructed.
Geoghegan, Michael W
2005-01-01
Podcasting is the art of recording radio show style audio tracks, then distributing them to listeners on the Web via podcasting software such as iPodder. From downloading podcasts to producing a track for fun or profit, "Podcast Solutions" covers the entire world of podcasting with insight, humor, and the unmatched wisdom of experience.
Sonneveld, C.; Voogt, W.
2009-01-01
The characteristics of the soil solution in the root environment in the greenhouse industry differ much from those for field grown crops. This is caused firstly by the growing conditions in the greenhouse, which strongly differ from those in the field and secondly the function attributed to the soil
Reza A Zoroofi
2007-08-01
Medal Electronic (ME) Engineering Company provides high-quality systems, software and services in medical image management, processing and visualization. We assist health care professionals to improve and extend the efficiency of their practices with cost-effective solutions. ME is the developer of several medical software products, including MEDAL-PACS, 3D-Sonosoft, Analytical-Electrophoresis, CBONE and Rhino-Plus. ME is also the exclusive distributor of PACSPLUS in Iran. PACSPLUS is an international, standard, scalable and enterprise PACS solution. PACSPLUS has ISO, CE and FDA-510 approvals. It is now operational in more than 1000 clinical environments throughout the globe. We discuss the key features of the PACSPLUS system for dealing with real-world challenges in PACS, as well as the PACS solutions needed to fulfill the demands of clinicians in Iran. Our experience in developing high-end medical software confirms our capability in providing PACSPLUS as an ultimate PACS solution in Iran.
Tomashevskiy, L.P.; Boldin, V.M.; Borovikov, P.A.; Fedorova, G.G.; Koshelova, I.F.; Krivoshchekova, N.P.; Prokhorevich, L.D.; Prudnikova, N.N.; Vin, L.R.
1982-01-01
This solution is designed to harden quickly in a cool environment. Phenol-formaldehyde tar is used as a hardening agent along with a modified diethylene glycol in the following amounts (parts by weight): phenol-formaldehyde tar and diethylene glycol = 100; acidic hardener = 8-16; water = 2-4.
LU LING
2010-01-01
World Expo's China Pavilion is a large crimson building, but it's green at heart. The pavilion, a magnificent symbol of Chinese culture, is also a "green landmark" on the world stage, thanks to German company Siemens' energy-saving solutions.
Marginal pricing of transmission services: An analysis of cost recovery
Perez-Arriaga, I.J.; Rubio, F.J. [Univ. Pontificia Comillas, Madrid (Spain); Puerta, J.F.; Arceluz, J.; Marin, J. [IBERDROLA, Bilbao (Spain). Unidad de Planificacion Estrategica
1995-02-01
This paper presents an in-depth analysis of network revenues computed with marginal pricing, and in particular it investigates the reasons why marginal prices fail to recover the total incurred network costs in actual power systems. The basic theoretical results are presented and the major causes of the mismatch between network costs and marginal revenues are identified and illustrated with numerical examples, some tutorial and others of realistic size. The regulatory implications of marginal network pricing in the context of competitive electricity markets are analyzed, and suggestions are provided for the meaningful allocation of the costs of the network among its users.
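The under-recovery phenomenon can be illustrated with an invented two-bus example: under nodal marginal pricing, a congested line collects a congestion rent that need not cover the network's total cost. All numbers below are hypothetical, not drawn from the paper's case studies.

```python
cap = 60.0             # line capacity, MW
load = 100.0           # demand at bus 2, MW
c1, c2 = 10.0, 30.0    # marginal generation costs at buses 1 and 2, $/MWh
line_cost = 2000.0     # hypothetical hourly share of total network cost, $

flow = min(load, cap)          # import cheap bus-1 power up to the line limit
local_gen = load - flow        # remainder supplied by the expensive bus-2 unit
p1, p2 = c1, c2                # nodal marginal prices once the line binds
congestion_rent = flow * (p2 - p1)

print(congestion_rent)         # prints 1200.0, well short of line_cost
```

The rent equals the price difference across the congested line times the flow; whenever it falls short of the (fixed) network cost, a complementary charge must allocate the remainder among users, which is the regulatory question the abstract raises.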
The many ways to be marginal in a group.
Ellemers, Naomi; Jetten, Jolanda
2013-02-01
Previous theory and research primarily address marginal group members on the path to achieve core membership status. The authors argue that these only represent one form of marginality and that there are many other ways to be marginal within the group. The authors develop a dynamic model in which marginality is conceptualized as resulting from group and individual negotiation about inclusion (the Marginality as Resulting From Group and Individual Negotiation About Inclusion [MARGINI] model), and where individual and group inclusion goals can converge (resulting in relatively stable forms of marginality) or diverge (resulting in less stable forms of marginality). When the marginal position is unstable, individuals can either be motivated to move toward or move further away from the group, and such changing inclusion goals are associated with different emotions and behaviors. The authors argue that one needs to understand the interplay between individual and group inclusion goals to predict and explain the full complexity and diversity of the behavior of marginal group members.
Comparative biogeochemistry-ecosystem-human interactions on dynamic continental margins
Levin, Lisa A.; Liu, Kon-Kee; Emeis, Kay-Christian; Breitburg, Denise L.; Cloern, James; Deutsch, Curtis; Giani, Michele; Goffart, Anne; Hofmann, Eileen E.; Lachkar, Zouhair; Limburg, Karin; Liu, Su-Mei; Montes, Enrique; Naqvi, Wajih; Ragueneau, Olivier; Rabouille, Christophe; Sarkar, Santosh Kumar; Swaney, Dennis P.; Wassman, Paul; Wishner, Karen F.
2014-01-01
The ocean’s continental margins face strong and rapid change, forced by a combination of direct human activity, anthropogenic CO2-induced climate change, and natural variability. Stimulated by discussions in Goa, India at the IMBER IMBIZO III, we (1) provide an overview of the drivers of biogeochemical variation and change on margins, (2) compare temporal trends in hydrographic and biogeochemical data across different margins, (3) review ecosystem responses to these changes, (4) highlight the importance of margin time series for detecting and attributing change and (5) examine societal responses to changing margin biogeochemistry and ecosystems. We synthesize information over a wide range of margin settings in order to identify the commonalities and distinctions among continental margin ecosystems. Key drivers of biogeochemical variation include long-term climate cycles, CO2-induced warming, acidification, and deoxygenation, as well as sea level rise, eutrophication, hydrologic and water cycle alteration, changing land use, fishing, and species invasion. Ecosystem responses are complex and impact major margin services including primary production, fisheries production, nutrient cycling, shoreline protection, chemical buffering, and biodiversity. Despite regional differences, the societal consequences of these changes are unarguably large and mandate coherent actions to reduce, mitigate and adapt to multiple stressors on continental margins.
Marginal distortion of thermally incompatible metal ceramic crowns with overextended margins.
Nakamura, Y; Anusavice, K J
1998-01-01
The present study tested the hypothesis that metal ceramic crowns with a varying axial height are more susceptible to marginal distortion during mechanical and thermal processing treatments than crowns with a uniform axial height. Copings of Pd-Cu-Ga alloy with buccal margin extensions of 0, 1.5, and 3.0 mm were prepared. Oxidized copings were veneered with experimental opaque porcelain with a mean thermal contraction coefficient (25 °C to 500 °C) that was either 2.1 ppm/°C below (Δα = +2.1 ppm/°C) or 0.1 ppm/°C above (Δα = -0.1 ppm/°C) that of the alloy. Nine groups of six specimens each were prepared for analysis. Eighteen copings from these 54 specimens were used as porcelain-free controls. All specimens were subjected to a 10-step procedure including grinding, oxidation, firing of four opaque porcelain layers (O1: 0.15 mm; O2: 0.15 mm; O3: 0.5 mm; O4: 0.5 mm), glazing, abrasive blasting for 15 seconds, removal of ceramic by dissolution in hydrogen fluoride, and a postannealing treatment. The control specimens were also subjected to this procedure with the exception of the firing of four layers of porcelain, which were not applied. Marginal gap width was determined using a measuring microscope at a magnification of 30×. Analysis of variance revealed a significant difference in mean gap width as a function of axial length. The largest gap change was associated with a 3.0-mm buccal extension and the negative mismatch condition (Δα < 0). Marginal distortion of crowns decreases as the axial length becomes more uniform. Analysis of crown distortion based on differences in the mean contraction coefficients of metal and porcelain alone is not recommended because it ignores the effects of metal grinding, metal sandblasting, and transient stress.
K K Ajay; A K Chaubey; K S Krishna; D Gopala Rao; D Sar
2010-12-01
Multi-channel seismic reflection profiles across the southwest continental margin of India (SWCMI) show the presence of westerly dipping seismic reflectors beneath sedimentary strata along the western flank of the Laccadive Ridge, the northernmost part of the Chagos–Laccadive Ridge system. Velocity structure, seismic character, a 2D gravity model, and the geographic locations of the dipping reflectors suggest that these reflectors are volcanic in origin and are interpreted as seaward dipping reflectors (SDRs). The SDRs, 15 to 27 km wide and overlain by ∼1 km of sediment, are observed at three locations and characterized by stacks of laterally continuous, divergent, and off-lapping reflectors. The occurrence of SDRs along the western flank of the Laccadive Ridge adjacent to oceanic crust of the Arabian Basin, together with a 2D crustal model deduced from the free-air gravity anomaly, suggests that they are genetically related to incipient volcanism during the separation of Madagascar from India. We suggest that (i) the SWCMI is a volcanic passive margin developed during the India–Madagascar breakup in the Late Cretaceous, and (ii) the continent–ocean transition lies at the western margin of the Laccadive Ridge, west of the feather edge of the SDRs. The occurrence of SDRs on the western flank of the Laccadive Ridge and the inferred zone of transition from continent to ocean further suggest a continental nature for the crust of the Laccadive Ridge.
Heat flow in northwest Pacific marginal seas
JIANG Lili; LI Guanbao; LI Naisheng
2004-01-01
Heat flow studies in the Northwest Pacific marginal seas have a history of more than 40 years, with more than 4000 heat flow values obtained. The regional average value is 80.4 mW/m2, lower than the world average of 87 mW/m2 but higher than those of the Eurasian continent and the Pacific Ocean, reflecting the regional crustal properties of the area. Studies of the heat flow distribution and contour patterns at 1°×1° and 2°×2° scales in the Northwest Pacific marginal seas revealed that the highest heat flow anomalies in the area occur along back-arc basins and island arcs in a clearly northeasterly trend. Exceptions are the Komandorskaya Basin (KMB), the Izu-Bonin Trough (IBT) and the Mariana Trough (MT), which extend northwestward. Contours of low heat flow mark the boundaries between the continent and the ocean. Present heat flow values reflect the imprint of the last thermal event and relate closely to tectonic activity. Areas of high heat flow gradient have a high frequency of earthquakes; faulting therefore controlled the pattern of the heat flow anomalies. A heat flow gradient in the 135° direction indicates a major lithospheric transformation oceanward resulting from movement of the earth's material. In this paper, we describe patterns of heat flow distribution in the Northwest Pacific and changes in heat flow values in horizontal and vertical directions, drawing on the studies of Shi (1997) on the landforms of the island arcs of east Asia and plate movement, and the results of Shi and Zhang (1998) on thermal simulation of the subduction of an active oceanic ridge and island-arc activity. A preliminary geodynamic model of the Northwest Pacific and its adjacent area is put forward: there is a great lateral heat flow gradient at the surface of the mantle between ocean and continent, indicating that asthenospheric material moves from continent to ocean, causing movement of the crust.
OLIVEIRA Fabiana Sodré de
1999-01-01
The marginal microleakage of class II amalgam restorations (Dispersalloy) associated with copal varnish (Copalite) and with two dentin bonding agents (Scotchbond Multi-uso Plus and Multi Bond Alpha) was evaluated in vitro and compared by two methods: scores and linear measurements. Forty-five sound premolars were used, on which two separate class II cavities were prepared on the mesial and distal surfaces. After restoration, the specimens were thermocycled and stored in a solution of 0.5% basic fuchsin for 24 hours. The analysis allowed the conclusion that none of the three restorative systems eliminated marginal microleakage. Nevertheless, leakage was significantly smaller in the restorations associated with dentin bonding agents than in those with copal varnish. The linear measurement method was more sensitive than the score criteria.
Maximum Rate of Growth of Enstrophy in Solutions of the Fractional Burgers Equation
Yun, Dongfang
2016-01-01
This investigation is a part of a research program aiming to characterize the extreme behavior possible in hydrodynamic models by probing the sharpness of estimates on the growth of certain fundamental quantities. We consider here the rate of growth of the classical and fractional enstrophy in the fractional Burgers equation in the subcritical, critical and supercritical regime. First, we obtain estimates on these rates of growth and then show that these estimates are sharp up to numerical prefactors. In particular, we conclude that the power-law dependence of the enstrophy rate of growth on the fractional dissipation exponent has the same global form in the subcritical, critical and parts of the supercritical regime. This is done by numerically solving suitably defined constrained maximization problems and then demonstrating that for different values of the fractional dissipation exponent the obtained maximizers saturate the upper bounds in the estimates as the enstrophy increases. In addition, nontrivial be...
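For readers unfamiliar with the quantities the abstract refers to, the setting can be written in a standard form; the notation below is a common convention in this literature, not taken from the paper itself:

```latex
% Fractional Burgers equation with viscosity \nu > 0 and dissipation exponent \alpha:
u_t + u\,u_x + \nu\,(-\Delta)^{\alpha} u = 0
% Classical enstrophy and its fractional generalization:
\mathcal{E}(u) = \frac{1}{2}\int u_x^2 \, dx,
\qquad
\mathcal{E}_{\alpha}(u) = \frac{1}{2}\int \bigl| (-\Delta)^{\alpha/2} u \bigr|^2 \, dx
```

With this convention the exponent alpha = 1/2 is critical: solutions remain smooth in the subcritical and critical regimes (alpha >= 1/2), while blow-up is possible for alpha < 1/2, which is why the sharpness of the enstrophy growth estimates is probed separately in each regime.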
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Maximum creditable compensation. 211.14... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
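The quarter-of-ultimate limit in § 230.24(a) is simple arithmetic; a minimal sketch follows (the function name and the 55,000 psi example value are illustrative assumptions, not taken from the regulation):

```python
def max_allowable_stress(ultimate_strength_psi: float) -> float:
    """Apply the 49 CFR 230.24(a) limit: the maximum allowable stress on a
    boiler component may not exceed 1/4 of the material's ultimate strength."""
    return ultimate_strength_psi / 4.0

# Illustrative boiler plate with an assumed 55,000 psi ultimate tensile strength:
print(max_allowable_stress(55_000))  # → 13750.0
```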