G. Munhoven
2009-06-01
Many sensitivity studies have been carried out, using climate models of different degrees of complexity, to test the climate response to Last Glacial Maximum boundary conditions. Here, instead of adding the forcings successively as in most previous studies, we applied the separation method of U. Stein and P. Alpert (1993) in order to determine rigorously the different contributions of the boundary condition modifications and to isolate the pure contributions from the interactions among the forcings. We carried out a series of sensitivity experiments with the intermediate-complexity model Planet Simulator, investigating the contributions of the ice sheet expansion and elevation, the lowering of atmospheric CO2, and the vegetation cover change to the LGM climate. The separation of the ice cover and orographic contributions shows that the ice albedo effect is the main contributor to the cooling of the Northern Hemisphere, whereas orography has only a local cooling impact over the ice sheets. The expansion of ice cover in the Northern Hemisphere causes a disruption of the tropical precipitation and a southward shift of the ITCZ. The orographic forcing mainly contributes to the disruption of the atmospheric circulation in the Northern Hemisphere, leading to a redistribution of the precipitation, but only weakly impacts the tropics. The isolated vegetation contribution also induces strong cooling over the continents of the Northern Hemisphere that further affects the tropical precipitation and reinforces the southward shift of the ITCZ when combined with the ice forcing. The combinations of the forcings generate many non-linear interactions that reinforce or weaken the pure contributions, depending on the climatic mechanism involved, but they are generally weaker than the pure contributions. Finally, the comparison between the LGM simulated climate and climatic reconstructions over Eurasia suggests that our results reproduce well the south-west to north-east temperature gradients over Eurasia.
G. Munhoven
2009-01-01
Many sensitivity studies have been carried out, using simplified GCMs, to test the climate response to Last Glacial Maximum boundary conditions. Here, instead of adding the forcings successively as in previous studies, we applied the separation method of Stein and Alpert (1993) in order to determine rigorously the different contributions of the boundary condition modifications and to isolate the pure contributions from the interactions among the forcings. We carried out a series of sensitivity experiments with the intermediate-complexity model Planet Simulator, investigating the contributions of the ice sheet expansion and elevation, the lowering of atmospheric CO2, and the vegetation cover change to the LGM climate. The results clearly identify the ice cover forcing as the main contributor to the cooling of the Northern Hemisphere, and also to the tropical precipitation disruption leading to the southward shift of the ITCZ, while the orographic forcing mainly contributes to the disruption of the atmospheric circulation in the Northern Hemisphere. The isolated vegetation contribution also induces strong cooling over the continents of the Northern Hemisphere, sufficient to further affect the tropical precipitation and reinforce the southward shift of the ITCZ when combined with the ice forcing. The combinations of the forcings generate many non-linear interactions that reinforce or weaken the pure contributions, depending on the climatic mechanism involved, but they are generally weaker than the pure contributions. Finally, the comparison between the LGM simulated climate and climatic reconstructions over Eurasia suggests that our results reproduce well the south-west to north-east temperature gradients over Eurasia.
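The Stein and Alpert (1993) factor separation used in both abstracts above can be sketched in a few lines. The following is a minimal two-forcing illustration with invented numbers, not the Planet Simulator configuration: given runs with no forcing, each forcing alone, and both together, the pure contributions and their interaction separate as follows.

```python
def factor_separation(f):
    """Stein and Alpert (1993) factor separation for two forcings.

    f maps on/off flags for the two forcings to a simulated response,
    e.g. f[(1, 0)] is the run with only forcing 1 switched on.
    Returns the two pure contributions and the interaction term.
    """
    f0 = f[(0, 0)]                          # control run, no forcings
    f1_pure = f[(1, 0)] - f0                # pure contribution of forcing 1
    f2_pure = f[(0, 1)] - f0                # pure contribution of forcing 2
    # interaction: what the combined run adds beyond the sum of pure parts
    f12 = f[(1, 1)] - f[(1, 0)] - f[(0, 1)] + f0
    return f1_pure, f2_pure, f12

# toy responses (say, a regional mean temperature in deg C) from 4 runs
runs = {(0, 0): 15.0, (1, 0): 11.0, (0, 1): 13.5, (1, 1): 9.0}
f1, f2, f12 = factor_separation(runs)
```

With these numbers the two pure contributions are cooling terms and the interaction is a small residual, mirroring the papers' finding that interactions are generally weaker than pure contributions.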
On the Threshold of Maximum-Distance Separable Codes
Kindarji, Bruno; Chabanne, Hervé
2010-01-01
Starting from a practical use of Reed-Solomon codes in a cryptographic scheme published at Indocrypt'09, this paper deals with the threshold of linear q-ary error-correcting codes. The security of this scheme is based on the intractability of polynomial reconstruction when there is too much noise in the vector. Our approach switches from this paradigm to an information-theoretic point of view: is there a class of elements that are so far away from the code that the list size is always superpolynomial? Or, dually speaking, is maximum-likelihood decoding almost surely impossible? We relate this issue to the decoding threshold of a code and show that when the minimum distance of the code is high enough, the threshold effect is very sharp. In a second part, we give explicit lower bounds on the threshold of Maximum-Distance Separable codes such as Reed-Solomon codes, and compute the threshold for the toy example that motivates this study.
Maximum Likelihood Factor Structure of the Family Environment Scale.
Fowler, Patrick C.
1981-01-01
Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)
Heteroscedastic one-factor models and marginal maximum likelihood estimation
Hessen, D.J.; Dolan, C.V.
2009-01-01
In the present paper, a general class of heteroscedastic one-factor models is considered. In these models, the residual variances of the observed scores are explicitly modelled as parametric functions of the one-dimensional factor score. A marginal maximum likelihood procedure for parameter estimation...
Maximum Time Separation of Events in Cyclic Systems with Linear and Latest Timing Constraints
Jin, Fen; Hulgaard, Henrik; Cerny, Eduard
1998-01-01
The determination of the maximum time separations of events is important in the design, synthesis, and verification of digital systems, especially in interface timing verification. Many researchers have explored solutions to the problem with various restrictions: a) on the type of constraints, and b) on whether the events in the specification are allowed to occur repeatedly. When the events can occur only once, the problem is well solved. There are fewer concrete results for systems where the events can occur repeatedly. We extend the work by Hulgaard et al. for computing the maximum...
Minimizing Maximum Response Time and Delay Factor in Broadcast Scheduling
Chekuri, Chandra; Moseley, Benjamin
2009-01-01
We consider online algorithms for pull-based broadcast scheduling. In this setting there are n pages of information at a server and requests for pages arrive online. When the server serves (broadcasts) a page p, all outstanding requests for that page are satisfied. We study two related metrics, namely maximum response time (waiting time) and maximum delay-factor, and their weighted versions. We obtain the following results in the worst-case online competitive model. - We show that FIFO (first-in first-out) is 2-competitive even when the page sizes are different. Previously this was known only for unit-sized pages [10] via a delicate argument. Our proof differs from [10] and is perhaps more intuitive. - We give an online algorithm for maximum delay-factor that is O(1/ε²)-competitive with (1+ε)-speed for unit-sized pages and with (2+ε)-speed for different sized pages. This improves on the algorithm in [12], which required (2+ε)-speed and (4+ε)-speed respectively. In addition we show that the algori...
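As a rough illustration of the FIFO policy analyzed above, here is a minimal discrete-time simulation for unit-sized pages; the function name and the integer-time model are our own simplifications, not the paper's formal setting.

```python
def fifo_max_response(requests, horizon):
    """Simulate FIFO broadcast scheduling for unit-sized pages.

    requests: list of (arrival_time, page).  At each integer time step the
    server broadcasts the page whose oldest outstanding request arrived
    first; one broadcast satisfies all outstanding requests for that page.
    Returns the maximum response time (completion - arrival) observed.
    """
    pending = sorted(requests)
    outstanding = {}                     # page -> arrival times still waiting
    responses = []
    i = 0
    for t in range(horizon):
        while i < len(pending) and pending[i][0] <= t:
            outstanding.setdefault(pending[i][1], []).append(pending[i][0])
            i += 1
        if outstanding:
            # FIFO rule: pick the page with the earliest waiting request
            page = min(outstanding, key=lambda p: min(outstanding[p]))
            for arrival in outstanding.pop(page):
                responses.append(t + 1 - arrival)  # broadcast ends at t + 1
    return max(responses) if responses else 0

# two requests for page "a" are satisfied by separate broadcasts here
max_rt = fifo_max_response([(0, "a"), (0, "b"), (1, "a")], horizon=5)
```

Note how a single broadcast of "a" at time 0 clears every outstanding request for it, which is the feature that makes broadcast scheduling differ from unicast scheduling.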
Mohammad H. Radfar
2006-11-01
We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signal's vocal-tract-related filters. Then, the mean vectors of PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.
METHOD FOR DETERMINING THE MAXIMUM ARRANGEMENT FACTOR OF FOOTWEAR PARTS
DRIŞCU Mariana
2014-05-01
By classic methodology, designing footwear is a very complex and laborious activity, because the classic methodology requires many graphic executions by manual means, which consume a lot of the producer's time. Moreover, the results of this classical methodology may contain many inaccuracies with the most unpleasant consequences for the footwear producer. Thus, a customer who buys a footwear product on the basis of the characteristics written on the product (size, width) can notice after a period that the product has flaws because of inadequate design. In order to avoid such situations, the strictest scientific criteria must be followed when designing a footwear product. The decisive step in this direction was made some time ago, as a result of powerful technical development and the massive implementation of electronic computing systems and informatics. This paper presents a software product for determining all possible arrangements of a footwear product's reference points, in order to automatically obtain the maximum arrangement factor. The user multiplies the pattern in order to find the most economical arrangement of the reference points. For this purpose, the user must test a few arrangement variants, in the translation and rotation-translation systems. The same process is used in establishing the arrangement factor for the two reference points of the designed footwear product. After testing several variants of arrangement in the translation and rotation-translation systems, the maximum arrangement factors are chosen. This allows the user to estimate the material waste.
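The arrangement (material utilization) factor described above can be illustrated with a toy calculation. The function and the numbers below are hypothetical, assuming a simple translation system in which each multiplied pattern occupies one rectangular cell of the material; the software in the paper probes variants in the same spirit and keeps the maximum.

```python
def arrangement_factor(pattern_area, pitch_x, pitch_y):
    """Utilization for a plain translation arrangement: one pattern
    (net area pattern_area) per pitch_x-by-pitch_y cell of material."""
    return pattern_area / (pitch_x * pitch_y)

# probe a few translation variants (areas in cm^2, pitches in cm)
variants = [(30.0, 10.0), (28.0, 11.0), (32.0, 9.5)]
factors = [arrangement_factor(250.0, px, py) for px, py in variants]
best = max(factors)   # keep the variant with maximum utilization
```

A higher factor means less inter-pattern waste, so maximizing it over the probed variants directly minimizes material loss.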
Radiation Pressure Acceleration: the factors limiting maximum attainable ion energy
Bulanov, S S; Schroeder, C B; Bulanov, S V; Esirkepov, T Zh; Kando, M; Pegoraro, F; Leemans, W P
2016-01-01
Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near-complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. Tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it trans...
Separation energy dependence of hole form factors
Van de Wiele, J. [Institut de Physique Nucleaire, 91 - Orsay (France); Vdovin, A. [Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Langevin-Joliot, H. [Institut de Physique Nucleaire, 91 - Orsay (France)
1996-08-05
Form factors of fragmented hole states are studied within the quasiparticle-phonon model, using the inhomogeneous equation method. The validity of this method is successfully checked by comparison with coupled equation solutions in schematic vibrational model cases. A systematic investigation of form factors is performed for neutron and proton hole states in the valence and first inner shells of {sup 208}Pb. Large fluctuations of form factor radii are observed for individual levels superimposed on a general increase with separation energy. Average characteristics are introduced for groups of levels, namely the mean form factors, summed source terms and correction potentials, and their behaviour is presented. The role of the relative values of the interaction radius parameter and binding well radius is discussed in detail. (orig.).
Quasi Maximum Likelihood Analysis of High Dimensional Constrained Factor Models
Li, Kunpeng; Li,Qi; Lu, Lina
2016-01-01
Factor models have been widely used in practice. However, an undesirable feature of a high-dimensional factor model is that it has too many parameters. An effective way to address this issue, proposed in a seminal work by Tsai and Tsay (2010), is to decompose the loadings matrix into the product of a known high-dimensional matrix and an unknown low-dimensional matrix, which Tsai and Tsay (2010) name constrained factor models. This paper investigates the estimation and inferential theory ...
A maximum entropy approach to separating noise from signal in bimodal affiliation networks
Dianati, Navid
2016-01-01
In practice, many empirical networks, including co-authorship and collocation networks are unimodal projections of a bipartite data structure where one layer represents entities, the second layer consists of a number of sets representing affiliations, attributes, groups, etc., and an inter-layer link indicates membership of an entity in a set. The edge weight in the unimodal projection, which we refer to as a co-occurrence network, counts the number of sets to which both end-nodes are linked. Interpreting such dense networks requires statistical analysis that takes into account the bipartite structure of the underlying data. Here we develop a statistical significance metric for such networks based on a maximum entropy null model which preserves both the frequency sequence of the individuals/entities and the size sequence of the sets. Solving the maximum entropy problem is reduced to solving a system of nonlinear equations for which fast algorithms exist, thus eliminating the need for expensive Monte-Carlo sam...
Separation energy dependence of hole form factors
Van de Wiele, J.; Langevin-Joliot, H. [Paris-11 Univ., 91 - Orsay (France). Inst. de Physique Nucleaire; Vdovin, A. [Joint Inst. for Nuclear Research, Dubna (Russian Federation). Bogoliubov Lab. of Theoretical Physics
1996-04-01
Form factors of fragmented hole states are studied within the quasiparticle-phonon model, using the inhomogeneous equation method. A systematic investigation of form factors is performed for neutron and proton hole states in the valence and first inner shells of {sup 208}Pb. Average characteristics are introduced for groups of levels, namely the mean form factors, summed source terms and correction potentials, and their behaviour is presented. The role of the relative values of the interaction radius parameter and binding well radius is discussed in detail. 21 refs.; Submitted to Elsevier Science.
Kazi Takpaya; Wei Gang
2003-01-01
Blind identification-blind equalization for Finite Impulse Response (FIR) Multiple Input-Multiple Output (MIMO) channels can be reformulated as the problem of blind sources separation. It has been shown that blind identification via decorrelating sub-channels method could recover the input sources. The Blind Identification via Decorrelating Sub-channels(BIDS)algorithm first constructs a set of decorrelators, which decorrelate the output signals of subchannels, and then estimates the channel matrix using the transfer functions of the decorrelators and finally recovers the input signal using the estimated channel matrix. In this paper, a new approximation of the input source for FIR-MIMO channels based on the maximum likelihood source separation method is proposed. The proposed method outperforms BIDS in the presence of additive white Gaussian noise.
AaziTakpaya; WeiGang
2003-01-01
Blind identification-blind equalization for finite Impulse Response(FIR)Multiple Input-Multiple Output(MIMO)channels can be reformulated as the problem of blind sources separation.It has been shown that blind identification via decorrelating sub-channels method could recover the input sources.The Blind Identification via Decorrelating Sub-channels(BIDS)algorithm first constructs a set of decorrelators,which decorrelate the output signals of subchannels,and then estimates the channel matrix using the transfer functions of the decorrelators and finally recovers the input signal using the estimated channel matrix.In this paper,a new qpproximation of the input source for FIR-MIMO channels based on the maximum likelihood source separation method is proposed.The proposed method outperforms BIDS in the presence of additive white Garssian noise.
Computing the stretch factor and maximum detour of paths, trees, and cycles in the normed space
Wulff-Nilsen, Christian; Grüne, Ansgar; Klein, Rolf;
2012-01-01
The stretch factor and maximum detour of a graph G embedded in a metric space measure how well G approximates the minimum complete graph containing G and the metric space, respectively. In this paper we show that computing the stretch factor of a rectilinear path in the L1 plane has a lower bound of Ω(n log n) in the algebraic computation tree model, and we describe a worst-case O(σn log² n) time algorithm for computing the stretch factor or maximum detour of a path embedded in the plane with a weighted fixed orientation metric defined by σ ... compute the stretch factor or maximum detour of trees and cycles in O(σn log^(d+1) n) time. We also obtain an optimal O(n) time algorithm for computing the maximum detour of a monotone rectilinear path in the L1 plane. © 2012 World Scientific...
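For intuition, the stretch factor of a path can always be computed by brute force in O(n²) time; the paper's contribution is beating this bound in specific metrics. The sketch below uses plain Euclidean distance rather than the fixed orientation metrics of the paper.

```python
import math

def stretch_factor(points):
    """Naive O(n^2) stretch factor of a polygonal path: the maximum, over
    all vertex pairs, of path distance divided by Euclidean distance."""
    n = len(points)
    # prefix[i] = path length from points[0] to points[i]
    prefix = [0.0]
    for i in range(1, n):
        prefix.append(prefix[-1] + math.dist(points[i - 1], points[i]))
    best = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            if d > 0:
                best = max(best, (prefix[j] - prefix[i]) / d)
    return best

# three unit sides of a square: the endpoints are distance 1 apart
# but 3 apart along the path, so the stretch factor is 3
sf = stretch_factor([(0, 0), (1, 0), (1, 1), (0, 1)])
```

The detour (the same ratio taken over all points on the path, not just vertices) needs more care, which is part of why sub-quadratic algorithms are non-trivial.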
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
Maximum Likelihood Estimation of Time-Varying Loadings in High-Dimensional Factor Models
Mikkelsen, Jakob Guldbæk; Hillebrand, Eric; Urga, Giovanni
In this paper, we develop a maximum likelihood estimator of time-varying loadings in high-dimensional factor models. We specify the loadings to evolve as stationary vector autoregressions (VAR) and show that consistent estimates of the loadings parameters can be obtained by a two-step maximum likelihood estimation procedure. In the first step, principal components are extracted from the data to form factor estimates. In the second step, the parameters of the loadings VARs are estimated as a set of univariate regression models with time-varying coefficients. We document the finite...
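A minimal sketch of the first, principal-components step of such a two-step procedure follows. For illustration the loadings are held constant and the second step is reduced to a plain regression of each series on the estimated factor; the paper's second step instead fits VARs with time-varying coefficients. All names and the scaling convention are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 50                      # time points and cross-section size

# simulate a one-factor panel: x_t = lambda * f_t + noise
f = rng.standard_normal(T)
lam = rng.standard_normal(N)
X = np.outer(f, lam) + 0.1 * rng.standard_normal((T, N))

# step 1: factor estimates by principal components (leading left
# singular vector of the data matrix, scaled to have variance ~ 1)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
f_hat = U[:, 0] * np.sqrt(T)

# step 2 (simplified): loadings by regressing each series on f_hat
lam_hat = X.T @ f_hat / T

# factors and loadings are only identified up to sign
sign = np.sign(f_hat @ f)
corr = np.corrcoef(sign * f_hat, f)[0, 1]
```

With a strong factor and weak idiosyncratic noise, the principal-components factor estimate tracks the true factor very closely, which is what makes the second-step regressions consistent.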
Factors influencing adoption of manure separation technology in The Netherlands.
Gebrezgabher, Solomie A; Meuwissen, Miranda P M; Kruseman, Gideon; Lakner, Dora; Oude Lansink, Alfons G J M
2015-03-01
Manure separation technologies are essential for sustainable livestock operations in areas with high livestock density, as these technologies result in better utilization of manure and reduced environmental impact. Technologies for manure separation have been well researched and are ready for use. Their use in the Netherlands, however, has been limited. This paper investigates the role of farm and farmer characteristics and of farmers' attitudes toward technology-specific attributes in influencing the likelihood of adoption of mechanical manure separation technology. The analysis used survey data collected from 111 Dutch dairy farmers in 2009. The results showed that the age and education level of the farmer and the farm size are important variables explaining the likelihood of adoption. In addition to farm and farmer characteristics, farmers' attitudes toward the different attributes of manure separation technology significantly affect the likelihood of adoption. The study generates useful information for policy makers, technology developers and distributors in identifying the factors that affect the decision-making behavior of farmers.
Maximum twin shear stress factor criterion for sliding mode fracture initiation
黎振兹; 李慧剑; 黎晓峰; 周洪彬; 郝圣旺
2002-01-01
Previous research on mixed mode fracture initiation criteria was mostly focused on opening mode fracture. In this study, the authors propose a new criterion for mixed mode sliding fracture initiation: the maximum twin shear stress factor criterion. The authors studied a finite-width plate with a central slant crack, subject to a far-field uniform uniaxial tensile or compressive stress.
Wu Fuxian; Wen Weidong
2016-01-01
The classic maximum entropy quantile function method (CMEQFM) based on probability weighted moments (PWMs) can accurately estimate the quantile function of a random variable from small samples, but not from very small samples. To overcome this weakness, the least square maximum entropy quantile function method (LSMEQFM) and a variant with a constraint condition (LSMEQFMCC) are proposed. To improve the confidence level of quantile function estimation, the scatter factor method is combined with the maximum entropy method to estimate the confidence interval of the quantile function. Comparisons of these methods on two common probability distributions and one engineering application show that CMEQFM can estimate the quantile function accurately on small samples but inaccurately on very small samples (10 samples); LSMEQFM and LSMEQFMCC can be successfully applied to very small samples; with consideration of the constraint condition on the quantile function, LSMEQFMCC is more stable and computationally accurate than LSMEQFM; and the scatter factor confidence interval estimation method based on LSMEQFM or LSMEQFMCC has good estimation accuracy for the confidence interval of the quantile function, with the LSMEQFMCC-based method being the most stable and accurate on very small samples (10 samples).
AlPOs Synthetic Factor Analysis Based on Maximum Weight and Minimum Redundancy Feature Selection
Yinghua Lv
2013-11-01
The relationship between synthetic factors and the resulting structures is critical for the rational synthesis of zeolites and related microporous materials. In this paper, we develop a new feature selection method for synthetic factor analysis of (6,12)-ring-containing microporous aluminophosphates (AlPOs). The proposed method is based on a maximum weight and minimum redundancy criterion. With the proposed method, we can select the feature subset in which the features are most relevant to the synthesized structure while the redundancy among the selected features is minimal. Based on the database of AlPO synthesis, we use (6,12)-ring-containing AlPOs as the target class and incorporate 21 synthetic factors, including gel composition, solvent and organic template, to predict the formation of (6,12)-ring-containing AlPOs. From these 21 features, 12 are selected as the optimal features to distinguish (6,12)-ring-containing AlPOs from AlPOs without such rings. The prediction model achieves a classification accuracy of 91.12% using the optimal feature subset. Comprehensive experiments demonstrate the effectiveness of the proposed algorithm, and a detailed analysis is given of the synthetic factors selected by the proposed method.
Construction of 2_III^(m-(m-k)) Designs with the Maximum Number of Clear Two-factor Interactions
Gui-Jun Yang
2007-01-01
It is useful to know the maximum number of clear two-factor interactions in a 2_III^(m-(m-k)) design. This paper provides a method to construct a 2_III^(m-(m-k)) design with the maximum number of clear two-factor interactions, and it is proved that the resulting designs have more clear two-factor interactions than those constructed by Tang et al. [6]. Moreover, the designs constructed are shown to have concise grid representations.
Kernel principal component and maximum autocorrelation factor analyses for change detection
Nielsen, Allan Aasbjerg; Canty, Morton John
2009-01-01
Principal component analysis (PCA) has often been used to detect change over time in remotely sensed images. A commonly used technique consists of finding the projections along the eigenvectors for data consisting of pair-wise (perhaps generalized) differences between corresponding spectral bands covering the same geographical region acquired at two different time points. In this paper kernel versions of the principal component and maximum autocorrelation factor (MAF) transformations are used to carry out the analysis. An example is based on bi-temporal Landsat-5 TM imagery over irrigation fields in Nevada acquired on successive passes of the Landsat-5 satellite in August-September 1991. The six-band images (the thermal band is omitted) with 1,000 by 1,000 28.5 m pixels were first processed with the iteratively re-weighted MAD (IR-MAD) algorithm in order to discriminate change. Then the MAD image...
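The basic idea of running PCA on pair-wise band differences, before any kernelization or MAF transformation, can be sketched on synthetic data as follows. The array shapes, the planted change region, and the noise levels are all illustrative assumptions, not the Landsat example from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w, bands = 32, 32, 6

# two synthetic acquisitions: identical background plus a changed patch
t1 = rng.normal(size=(h, w, bands))
t2 = t1 + 0.05 * rng.normal(size=(h, w, bands))
t2[8:16, 8:16, :] += 2.0              # simulated change region

# PCA on the pair-wise band differences: change concentrates in the
# leading component because it dominates the difference variance
D = (t2 - t1).reshape(-1, bands)
D = D - D.mean(axis=0)
cov = D.T @ D / (D.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
pc1 = D @ eigvecs[:, -1]                # leading principal component
change_score = np.abs(pc1).reshape(h, w)

changed_mean = change_score[8:16, 8:16].mean()
background_mean = change_score[:8, :].mean()
```

The planted patch stands out sharply in the leading-component score map; kernel PCA and MAF refine the same projection idea to capture non-linear structure and spatial autocorrelation.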
Weak minimum aberration and maximum number of clear two-factor interactions in 2
YANG; Guijun
2005-01-01
Fulgueras, Alyssa Marie; Poudel, Jeeban; Kim, Dong Sun; Cho, Jungho [Kongju National University, Cheonan (Korea, Republic of)
2016-01-15
The separation of ethylenediamine (EDA) from aqueous solution is a challenging problem because the mixture forms an azeotrope. Pressure-swing distillation (PSD) was investigated as a method of separating the azeotropic mixture. For a maximum-boiling azeotropic system, a pressure change does not greatly affect the azeotropic composition; nevertheless, the feasibility of using PSD was analyzed through process simulation. Experimental vapor-liquid equilibrium data for the water-EDA system were studied to select a suitable thermodynamic model. This study performed an optimization of the design parameters for each distillation column. Different combinations of operating pressures for the low- and high-pressure columns were used for each PSD simulation case. After the most efficient operating pressures were identified, two column configurations, low-high (LP+HP) and high-low (HP+LP), were further compared. Heat integration was applied to the PSD system to reduce low- and high-temperature utility consumption.
Shifted factor analysis for the separation of evoked dependent MEG signals
Kohl, F; Wuebbeler, G; Baer, M; Elster, C [Physikalisch-Technische Bundesanstalt (PTB), Abbestrasse 2-12, 10587 Berlin (Germany); Kolossa, D; Orglmeister, R, E-mail: florian.kohl@ptb.d [Technische Universitaet Berlin, Strasse des 17. Juni 135, 10623 Berlin (Germany)
2010-08-07
Decomposition of evoked magnetoencephalography (MEG) data into their underlying neuronal signals is an important step in the interpretation of these measurements. Often, independent component analysis (ICA) is employed for this purpose. However, ICA can fail, as for evoked MEG data the neuronal signals may not be statistically independent. We therefore consider an alternative approach based on the recently proposed shifted factor analysis model, which does not assume statistical independence of the neuronal signals. We suggest the application of this model in the time domain and present an estimation procedure based on a Taylor series expansion. We show, using synthetic evoked MEG data, that the proposed procedure can successfully separate evoked dependent neuronal signals where standard ICA fails. Latency estimation of the neuronal signals is an inherent part of the proposed procedure, and we demonstrate that the resulting latency estimates are superior to those obtained by a maximum likelihood method.
Key factors of eddy current separation for recovering aluminum from crushed e-waste.
Ruan, Jujun; Dong, Lipeng; Zheng, Jie; Zhang, Tao; Huang, Mingzhi; Xu, Zhenming
2017-02-01
Recovery of e-waste in China has caused serious pollution. Eddy current separation is an environmentally friendly technology for separating nonferrous metallic particles from crushed e-waste. However, due to complex particle characteristics, the separation efficiency of traditional eddy current separators was low. In production, the controllable operating factors of eddy current separation are the feeding speed, (ωR-v), and Sp. Little specific information is available about the influencing mechanisms and critical parameters of these factors in eddy current separation. This paper provides such information for the eddy current separation of aluminum particles from crushed waste refrigerator cabinets. Detachment angles increased with increasing (ωR-v), and separation efficiency increased with growing detachment angles. Aluminum particles were completely separated from plastic particles at the critical parameters of feeding speed 0.5 m/s and detachment angles greater than 6.61°. Sp/Sm of aluminum particles in crushed waste refrigerators ranged from 0.08 to 0.51, and separation efficiency increased with increasing Sp/Sm. This suggests developing new separators to recover smaller nonferrous metallic particles in e-waste recycling. High feeding speed degraded separation efficiency; however, a greater Sp of aluminum particles had a positive impact on separation efficiency and could raise the critical feeding speed, allowing greater throughput of eddy current separation. This paper will guide the eddy current separation of nonferrous metals from crushed e-waste in production.
Da Costa, M J; Colson, G; Frost, T J; Halley, J; Pesti, G M
2017-09-01
The objective of this experiment was to determine the maximum net returns digestible lysine (dLys) levels (MNRL) when maintaining the ideal amino acid ratio for starter diets of broilers raised sex separate or comingled (straight-run). A total of 3,240 Ross 708 chicks was separated by sex and placed in 90 pens by 2 rearing types: sex separate (36 males or 36 females) or straight-run (18 males + 18 females). Each rearing type was fed 6 starter diets (25 d) formulated to have dLys levels between 1.05 and 1.80%. A common grower diet with 1.02% of dLys was fed from 25 to 32 days. Body weight gain (BWG) and feed intake were assessed at 25 and 32 d for performance evaluation. Additionally, at 26 and 33 d, 4 birds per pen were sampled for carcass yield evaluation. Data were modeled using response surface methodology in order to estimate feed intake and whole carcass weight at 1,600 g live BW. Returns over feed cost were estimated for a 1.8-million-broiler complex of each rearing system under 9 feed/meat price scenarios. Results indicated that females needed more feed to reach market weight, followed by straight-run birds, and then males. At medium meat and feed prices, female birds had MNRL at 1.07% dLys, whereas straight-run and males had MNRL at 1.05%. As feed and meat prices increased, females had MNRL increased up to 1.15% dLys. Sex separation resulted in increased revenue under certain feed and meat prices, and before sex separation cost was deducted. When the sexing cost was subtracted from the returns, sex separation was not shown to be economically viable when targeting birds for light market BW. © 2017 Poultry Science Association Inc.
Oort, van I.M.; Witjes, J.A.; Kok, D.E.G.; Kiemeney, L.A.; Hulsbergen-van de Kaa, C.A.
2008-01-01
Previous studies suggest that maximum tumor diameter (MTD) is a predictor of recurrence in prostate cancer (PC). This study investigates the prognostic value of MTD for biochemical recurrence (BCR) in patients with PC, after radical prostatectomy (RP), with emphasis on high-risk localized prostate c
P. Heydari
2016-02-01
Background: The maximum aerobic capacity (VO2max) can be used to evaluate cardio-pulmonary condition and to provide a physiological balance between a person and his job. Objectives: The aim of this study was to estimate the maximum aerobic capacity and its associated factors among students of medical emergencies in Qazvin. Methods: This cross-sectional study was conducted on 36 male students of medical emergencies at Qazvin University of Medical Sciences in 2015. The Physical Activity Readiness Questionnaire (PAR-Q) and a demographic questionnaire were completed by the participants. The participants meeting the inclusion criteria were assessed using the Gerkin treadmill protocol. Data were analyzed using Mann-Whitney U and Kruskal-Wallis tests. Findings: Mean maximum aerobic capacity was 1.94±0.27 L/min. The maximum aerobic capacity was associated with weight and height groups. There was a significant positive correlation between maximal aerobic capacity and height, weight, and body mass index. Conclusion: The Gerkin treadmill test is useful for estimating the maximum aerobic capacity and the maximum working ability of students of medical emergencies.
Lupiani Castellanos, J.; Quinones Rodriguez, L. A.; Richarte Reina, J. M.; Ramos Caballero, L. J.; Angulo Pain, E.; Castro Ramierez, I. J.; Iborra Oquendo, M. A.; Urena Llinares, A.
2011-07-01
ESTRO Booklet 6 provides numerical data collected for four different field sizes and different accelerators for different beam qualities. Although the purpose of this guide is the calculation and verification of monitor units, we used data from a Siemens Mevatron Primus accelerator with 6 MV photons to perform quality control of the experimental measurements of the tissue-maximum ratio (TMR) and the output factor (OF) in air and in phantom.
Love of life and death distress: two separate factors.
Abdel-Khalek, Ahmed M
2007-01-01
The objectives of the current investigation were threefold: a) to explore gender differences in love of life (a new construct in the well-being domain) and death distress (death anxiety, death depression, and death obsession); b) to explore the relationship between the scales of these constructs; and c) to examine the factorial structure of these scales. The sample was 245 volunteer Kuwaiti college students (53.5% women). Their mean age was 21.9 (SD = 2.3). They responded to the Love of Life Scale, the Death Anxiety Scale, the Arabic Scale of Death Anxiety, the Death Depression Scale-Revised, and the Death Obsession Scale. Gender differences in love of life were not significant. However, women had significantly higher mean scores on the four death distress scales than did their male counterparts. None of the correlations between love of life and the death distress scales was significant, except a negative one between love of life and death depression in women. Two oblique factors were extracted: death distress and love of life. It was concluded that these constructs represent two distinct and independent factors. Counselors and clinicians dealing with death distress would find that it is not associated with love of life.
Empirical Study on Factors Influencing Residents' Behavior of Separating Household Wastes at Source
Qu Ying; Zhu Qinghua; Murray Haight
2007-01-01
Source separation is the basic premise for making effective use of household wastes. In eight cities of China, however, several pilot projects of source separation ultimately failed because of poor resident participation. To solve this problem, identifying the factors that influence residents' source-separation behavior becomes crucial. By means of a questionnaire survey, we conducted descriptive analysis and exploratory factor analysis. The results show that trouble-feeling, moral notion, environment protection, public education, environment value, and knowledge deficiency are the main factors that play an important role in residents' decisions to separate their household wastes. According to the contribution percentage of these six main factors to the total source-separation behavior, their influencing power is also analyzed, which will provide suggestions on household waste management for policy makers and decision makers in China.
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation…
Hewett, Timothy E; Webster, Kate E; Hurd, Wendy J
2017-08-16
The evolution of clinical practice and medical technology has yielded an increasing number of clinical measures and tests to assess a patient's progression and return-to-sport readiness after injury. The plethora of available tests may be burdensome to clinicians in the absence of evidence that demonstrates the utility of a given measurement. Thus, there is a critical need to identify a discrete number of metrics to capture during clinical assessment to effectively and concisely guide patient care. Data sources included PubMed and PubMed Central articles on the topic. We present a systematic approach to injury risk analyses and show how this concept may be used in algorithms for risk analyses for primary anterior cruciate ligament (ACL) injury in healthy athletes and patients after ACL reconstruction. In this article, we present the five-factor maximum model, which states that in any predictive model, a maximum of 5 variables will contribute in a meaningful manner to any risk factor analysis. We demonstrate how this model already exists for prevention of primary ACL injury, how it may guide development of the second ACL injury risk analysis, and how it may be applied across the injury spectrum for development of injury risk analyses.
Maximum likelihood estimation in constrained parameter spaces for mixtures of factor analyzers
Greselin, Francesca; Ingrassia, Salvatore
2013-01-01
Mixtures of factor analyzers are becoming more and more popular in the area of model-based clustering of high-dimensional data. Within the likelihood approach to data modeling, it is well known that the unconstrained log-likelihood function may present spurious maxima and singularities, due to specific patterns of the estimated covariance structure when its determinant approaches 0. To reduce such drawbacks, in this paper we introduce a procedure for the parameter estimati...
Impact maturity times and citation time windows: The 2-year maximum journal impact factor
Dorta-Gonzalez, Pablo
2013-01-01
Journal metrics are employed for the assessment of scientific scholarly journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIF) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one- and two-year-old articles, while the 5-year journal impact factor (5-JIF) counts citations to one- to five-year-old articles. Nevertheless, these indicators are not comparable among fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behaviour across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impacts more comparable in fields in which impact matures slowly. However, there is no optimal fixed impact maturity time valid for all fields. In some of them two years provides good performance, whereas in others three or more years are...
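The window arithmetic behind the 2-JIF and 5-JIF can be made concrete with a toy computation; the citation and item counts below are invented purely for illustration.

```python
def impact_factor(citations_by_age, items_by_age, window=2):
    """Journal impact factor: citations received in the census year to items
    published in the previous `window` years, divided by the number of
    citable items published in those years. Index i = items i years old."""
    cites = sum(citations_by_age[1:window + 1])
    items = sum(items_by_age[1:window + 1])
    return cites / items

# Illustrative journal: index 1 = one-year-old items, index 2 = two-year-old, ...
citations = [0, 120, 180, 210, 190, 150]  # citations received, by item age
items     = [0, 100, 100, 100, 100, 100]  # citable items published, by age
print(impact_factor(citations, items, window=2))  # 1.5
print(impact_factor(citations, items, window=5))  # 1.7
```

For this slowly maturing citation profile the 5-JIF (1.7) exceeds the 2-JIF (1.5), which is exactly the field-dependent maturity effect the abstract discusses.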
L. T. Murray
2013-09-01
The oxidative capacity of past atmospheres is highly uncertain. We present here a new climate-biosphere-chemistry modeling framework to determine oxidant levels in the present and past troposphere. We use the GEOS-Chem chemical transport model driven by meteorological fields from the NASA Goddard Institute for Space Studies (GISS) ModelE, with land cover and fire emissions from dynamic global vegetation models. We present time-slice simulations for the present day, late preindustrial (AD 1770), and the Last Glacial Maximum (LGM; 19–23 ka), and we test the sensitivity of model results to uncertainty in lightning and fire emissions. We find that most preindustrial and paleo climate simulations yield reduced oxidant levels relative to the present day. Contrary to prior studies, tropospheric mean OH in our ensemble shows little change at the LGM relative to the preindustrial (0.5 ± 12%), despite large reductions in methane concentrations. We find a simple linear relationship between tropospheric mean ozone photolysis rates, water vapor, and total emissions of NOx and reactive carbon that explains 72% of the variability in global mean OH in 11 different simulations across the last glacial-interglacial time interval and the Industrial Era. Key parameters controlling the tropospheric oxidative capacity over glacial-interglacial periods include overhead stratospheric ozone, tropospheric water vapor, and lightning NOx emissions. Variability in global mean OH since the LGM is insensitive to fire emissions. Our simulations are broadly consistent with ice-core records of Δ17O in sulfate and nitrate at the LGM, and of CO, HCHO, and H2O2 in the preindustrial. Our results imply that the glacial-interglacial changes in atmospheric methane observed in ice cores are predominantly driven by changes in its sources as opposed to its sink with OH.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
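As a sketch of the MAF idea (not the authors' implementation): MAF seeks linear combinations of the bands whose spatial autocorrelation is maximal, which reduces to a generalized eigenproblem between the covariance of the data and the covariance of its spatial differences. The two-band toy data below are invented for illustration.

```python
import numpy as np
from scipy.linalg import eigh

def maf(X, Xd):
    """Maximum autocorrelation factors.
    X  : (n, p) multivariate samples along a spatial/temporal layout
    Xd : (n, p) differences of the same samples (e.g. one-step shifts)
    Returns loadings and autocorrelations, ordered by decreasing autocorrelation."""
    S = np.cov(X, rowvar=False)    # covariance of the data
    Sd = np.cov(Xd, rowvar=False)  # covariance of the differences
    # Generalized eigenproblem Sd v = lambda S v; small lambda = high autocorrelation
    vals, vecs = eigh(Sd, S)       # eigh returns ascending eigenvalues
    autocorr = 1.0 - vals / 2.0    # so autocorr comes out descending
    return vecs, autocorr

# Toy data: a smooth (highly autocorrelated) band mixed with a white-noise band.
rng = np.random.default_rng(1)
n = 500
smooth = np.cumsum(rng.normal(size=n)) / 10   # random walk: strong autocorrelation
noise = rng.normal(size=n)                    # white noise: essentially none
X = np.column_stack([smooth + 0.1 * noise, noise])
Xd = np.diff(X, axis=0)                       # one-step differences
vecs, autocorr = maf(X[:-1], Xd)
print(np.round(autocorr, 2))                  # first MAF isolates the smooth signal
```

The first factor recovers the smooth band with autocorrelation near 1, while the last factor absorbs the noise; kriging the leading MAF images then interpolates the most spatially coherent structure, as in the paper.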
Single-channel source separation using non-negative matrix factorization
Schmidt, Mikkel Nørgaard
Single-channel source separation problems occur when a number of sources emit signals that are mixed and recorded by a single sensor, and we are interested in estimating the original source signals based on the recorded mixture. This problem, which occurs in many sciences, is inherently under-determined and its solution relies on making appropriate assumptions concerning the sources. This dissertation is concerned with model-based probabilistic single-channel source separation based on non-negative matrix factorization, and consists of two parts: i) three introductory chapters and ii) five published papers, in which a number of methods for single-channel source separation based on non-negative matrix factorization are presented. In the papers, the methods are applied to separating audio signals such as speech and musical instruments and separating different types of tissue in chemical shift imaging.
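As a minimal illustration of the decomposition underlying this line of work (a generic Lee-Seung NMF with Euclidean cost, not the dissertation's specific probabilistic models), a nonnegative mixture matrix V, e.g. a magnitude spectrogram, is factorized as V ≈ WH with nonnegative spectral bases W and activations H:

```python
import numpy as np

def nmf(V, r, n_iter=1000, eps=1e-9):
    """Factorize nonnegative V (m x n) as W (m x r) @ H (r x n)
    using Lee-Seung multiplicative updates for the Euclidean cost."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update bases
    return W, H

# Toy "spectrogram": two sources with distinct spectral profiles, mixed in time.
S = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # spectral bases (3 bins)
A = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])    # activations (3 frames)
V = S @ A                                           # observed single-channel mixture
W, H = nmf(V, r=2)
print(np.linalg.norm(V - W @ H))                    # small reconstruction error
```

Each recovered column of W with its row of H reconstructs one source's contribution, which is the basic mechanism the dissertation's methods build on.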
Factor Analysis of Wildfire and Risk Area Estimation in Korean Peninsula Using Maximum Entropy
Kim, Teayeon; Lim, Chul-Hee; Lee, Woo-Kyun; Kim, YouSeung; Heo, Seongbong; Cha, Sung Eun; Kim, Seajin
2016-04-01
The number of wildfires and the accompanying human injuries and physical damage have increased with frequent drought. Korea in particular experienced severe drought, and a number of wildfires occurred this year. We used the MaxEnt model to identify the major environmental factors for wildfire and used RCP scenarios to predict future wildfire risk areas. In this study, environmental variables including topographic, anthropogenic, and meteorological data were used to identify the contributing variables of wildfire in South and North Korea, which were then compared. For occurrence data, we used MODIS fire data after verification. In North Korea, the AUC (Area Under the ROC Curve) value was 0.890, which was high enough to explain the distribution of wildfires. South Korea had a lower AUC value than North Korea and a high mean standard deviation, meaning that fire there is poorly predicted with the same environmental variables. The AUC value for South Korea is expected to improve with additional environmental variables such as distance from trails and wildfire management systems. For instance, fires occurring within the DMZ (demilitarized zone, a 4 km boundary along the 38th parallel) have a decisive influence on fire risk areas in South Korea, but not in North Korea. The contributions of the environmental variables were more evenly distributed in North Korea than in South Korea; that is, South Korea is dependent on a few particular variables, whereas North Korea is explained by a number of variables with evenly distributed contributions. Although the AUC value and standard deviation for South Korea were not high enough to predict wildfire, the result is still meaningful for identifying scientific and social factors, since the response curves show which environmental variables carry great weight. We also made a future wildfire risk area map for the whole Korean peninsula using the same model. Under four RCP scenarios, it was found that severe climate change would lead wildfire risk areas to move north. Especially North
Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation
Schmidt, Mikkel N.; Mørup, Morten
2006-01-01
We present a novel method for blind separation of instruments in polyphonic music based on a non-negative matrix factor 2-D deconvolution algorithm. Using a model which is convolutive in both time and frequency, we factorize a spectrogram representation of music into components corresponding to individual instruments. Based on this factorization we separate the instruments using spectrogram masking. The proposed algorithm has applications in computational auditory scene analysis, music information retrieval, and automatic music transcription.
Taeseong Woo
2017-05-01
A quantitative diagnosis using magnetic resonance imaging (MRI) can be disturbed by radiofrequency (RF) field inhomogeneity induced by conductive implants. This inhomogeneity causes a local decrease of the signal intensity around the conductor, resulting in a deterioration of accurate quantification. In a previous study, we developed an MRI imaging method using a two-dimensional selective excitation pulse (2D pulse) to mitigate signal inhomogeneity induced by metallic implants. In this paper, the effect of the 2D pulse was evaluated quantitatively by numerical simulation and MRI experiments. We introduced two factors for evaluation: spatial resolution and maximum compensation factor. Numerical simulations were performed with two groups. One group was composed of four models with different signal loss widths, to evaluate the spatial resolution of the 2D pulse. The other group was also composed of four models, with different amounts of signal loss, for evaluating the maximum compensation factor. In the MRI experiments, we prepared phantoms containing conductors with different electrical conductivities, related to the amounts of signal intensity decrease. The recovery of signal intensity by the 2D pulse was observed in both the numerical simulations and the experiments.
BOUND ON THE MAXIMUM NUMBER OF CLEAR TWO-FACTOR INTERACTIONS FOR 2n-(n-k) DESIGNS
Zhao Shengli; Zhang Runchu
2008-01-01
The clear effects criterion is an important criterion for selecting fractional factorial designs [1]. Tang et al. [2] derived upper and lower bounds on the maximum number of clear two-factor interactions (2fi's) in 2n-(n-k) designs of resolution III and IV by constructing 2n-(n-k) designs. But the method in [2] sometimes does not perform well when the resolution is III. This article modifies the construction method for 2n-(n-k) designs of resolution III in [2]. The modified method is a great improvement over that used in [2].
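The notion of a clear 2fi can be made concrete by enumerating alias groups with bitmask symmetric differences (a sketch; the two example designs are standard textbook choices, not the constructions of [2]):

```python
from itertools import combinations

def clear_2fis(n, words):
    """Count clear two-factor interactions in a two-level fractional
    factorial design. Effects are bitmasks over n factor letters; `words`
    is the defining contrast subgroup (including the identity 0). A 2fi is
    clear if its alias group contains no main effect and no other 2fi."""
    mains = {1 << i for i in range(n)}
    tfis = {(1 << i) | (1 << j) for i, j in combinations(range(n), 2)}
    clear = 0
    for fi in tfis:
        group = {fi ^ w for w in words}  # xor = symmetric difference of effects
        if group.isdisjoint(mains) and len(group & tfis) == 1:  # only itself
            clear += 1
    return clear

# Resolution V 2^(5-1) with E = ABCD: defining relation I = ABCDE.
print(clear_2fis(5, [0, 0b11111]))              # 10: all 2fi's are clear
# Resolution III 2^(5-2) with D = AB, E = AC: I = ABD = ACE = BCDE.
ABD, ACE = 0b01011, 0b10101
print(clear_2fis(5, [0, ABD, ACE, ABD ^ ACE]))  # 0: no 2fi is clear
```

The contrast between the two designs shows why the resolution III case is delicate: aliasing with main effects and with other 2fi's can eliminate every clear 2fi, which is the regime the modified construction targets.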
Larsen, Anna Warberg; Astrup, Thomas
2011-01-01
CO2-loads from combustible waste are important inputs for national CO2 inventories and life-cycle assessments (LCA). CO2 emissions from waste incinerators are often expressed by emission factors in kg fossil CO2 emitted per GJ energy content of the waste. Various studies have shown considerable variations between emission factors for different incinerators, but the background for these variations has not been thoroughly examined. One important reason may be variations in collection of recyclable materials, as source separation alters the composition of the residual waste incinerated. The objective of this study was to quantify the importance of source separation for determination of emission factors for incineration of residual household waste. This was done by mimicking various source separation scenarios and, based on waste composition data, calculating resulting emission factors for residual waste...
Weak minimum aberration and maximum number of clear two-factor interactions in 2m-p Ⅳ designs
YANG Guijun; LIU Minqian; ZHANG Runchu
2005-01-01
Both the clear effects and minimum aberration criteria are important rules for design selection. In this paper, it is proved that some 2m-p IV designs have weak minimum aberration, by considering the number of clear two-factor interactions in the designs. Some conditions are provided under which a 2m-p IV design can have the maximum number of clear two-factor interactions and weak minimum aberration at the same time. Some weak minimum aberration 2m-p IV designs are provided as illustrations, and two nonisomorphic weak minimum aberration 213-6 IV designs are constructed at the end of this paper.
Functional Maximum Autocorrelation Factors
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA...
A PROSPECTIVE STUDY OF FACTORS INFLUENCING THE TIME OF SEPARATION OF UMBILICAL CORD
Lokesha
2014-12-01
BACKGROUND: The umbilical cord usually shrivels and falls off at around 5 to 15 days of life. It is important to know the timing of separation so that parents may be advised on proper cord care and misconceptions about cord separation may be allayed; early discharge has increased the need for women to receive accurate, relevant information on how to care for themselves and their newborns when discharged from the hospital. Delay in separation of the umbilical cord, umbilical discharge, odor, or granuloma causes concern and is a source of immense anxiety for parents. The interval between delivery and umbilical cord separation varies worldwide; this study was undertaken to determine the time of separation and the factors influencing it. OBJECTIVES: To determine the time of separation of the umbilical cord and the factors influencing it. METHODS: Babies admitted at a tertiary hospital were selected by purposive sampling. For each recruited baby, data were obtained on the mother's parity, mode of delivery, gestational age, birth weight, sex of the baby, method of resuscitation, phototherapy, IV antibiotics, cord blood TSH values, and the time of umbilical cord separation after birth. For newborns whose umbilical cord shriveled off during the hospital stay, information was obtained directly; a self-addressed postcard was given to the parents of newborns whose umbilical cord was intact at discharge, who were advised to note the date the cord fell off on the postcard and mail it. RESULTS: Cord separation time ranged from 3 to 11 days, with a mean of 5.62 ± 2.37 days, one to two days earlier than in previous studies. Seventy-nine (79) of one hundred and ten (110) cords separated between 5 and 7 days (71%); one baby had separation at 11 days. Babies who received antibiotics had a statistically significant delay in cord separation time; neonates who received antibiotics had a mean separation time of 6 ± 2.4 days as compared
Parental Separation and Cardiometabolic Risk Factors in Late Adolescence: A Cross-Cohort Comparison.
Soares, Ana Luiza Gonçalves; Gonçalves, Helen; Matijasevich, Alicia; Sequeira, Maija; Smith, George Davey; Menezes, Ana M B; Assunção, Maria Cecília; Wehrmeister, Fernando C; Fraser, Abigail; Howe, Laura D
2017-05-15
The aim of this study was to explore the association between parental separation during childhood (up to 18 years of age) and cardiometabolic risk factors (body mass index, fat mass index, blood pressure, physical activity, smoking, and alcohol consumption) in late adolescence using a cross-cohort comparison and to explore whether associations differ according to the age at which the parental separation occurred and the presence or absence of parental conflict prior to separation. Data from the Avon Longitudinal Study of Parents and Children (ALSPAC, United Kingdom) (1991-2011) and the 1993 Pelotas Birth Cohort (Brazil) (1993-2011) were used. The associations of parental separation with children's cardiometabolic risk factors were largely null. Higher odds of daily smoking were observed in both cohorts for those adolescents whose parents separated (for ALSPAC, odds ratio = 1.46; for Pelotas Birth Cohort, odds ratio = 1.98). Some additional associations were observed in the Pelotas Birth Cohort but were generally in the opposite direction to our a priori hypothesis: Parental separation was associated with lower blood pressure and fat mass index, and with more physical activity. No consistent differences were observed when analyses were stratified by child's age at parental separation or parental conflict. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
Luo, Hong; Ma, You-xin; Liu, Wen-jun; Li, Hong-mei
2010-05-01
Using the maximum upstream flow path, a self-developed method for calculating slope length based on Arc Macro Language (AML) was applied to five groups of DEM data for different regions in Bijie Prefecture of Guizhou Province to compute the slope length and topographic factors in the Prefecture. The time cost of calculating the slope length and the values of the topographic factors were analyzed and compared with those of the iterative slope length method based on AML (ISLA) and on C++ (ISLC). The results showed that the new method is feasible for calculating the slope length and topographic factors in the revised universal soil loss equation, with the same effect as the iterative slope length method. Compared with ISLA, the new method had high computing efficiency, greatly decreased time consumption, and could be applied over a large area to estimate the slope length and topographic factors based on AML. Compared with ISLC, the new method had similar computing efficiency, but its code was easier to write, modify, and debug using AML. Therefore, the new method can be used more broadly by GIS users.
Zhou, Guoxu; Yang, Zuyuan; Xie, Shengli; Yang, Jun-Mei
2011-04-01
Online blind source separation (BSS) is proposed to overcome the high computational cost that limits the practical applications of traditional batch BSS algorithms. However, existing online BSS methods are mainly used to separate independent or uncorrelated sources. Recently, nonnegative matrix factorization (NMF) has shown great potential to separate correlated sources, where constraints are often imposed to overcome the non-uniqueness of the factorization. In this paper, an incremental NMF with a volume constraint is derived and utilized for online BSS. The volume constraint on the mixing matrix enhances the identifiability of the sources, while the incremental learning mode reduces the computational cost. The proposed method takes advantage of the natural-gradient-based multiplicative update rule, and it performs especially well in the recovery of dependent sources. Simulations in BSS for dual-energy X-ray images, online encrypted speech signals, and highly correlated face images show the validity of the proposed method.
A factor analytic and psychometric examination of pathology of separation-individuation.
Lapsley, D K; Aalsma, M C; Varshney, N M
2001-07-01
Two studies are described that attempt to determine whether standard scale-reduction techniques could yield a construct-valid diagnostic screen of pathology of separation-individuation for use in nonclinical university settings. In Study 1 (N = 210), a measure of pathology of separation-individuation (PATHSEP) was reduced successfully to a single, internally consistent factor, accounting for 36% of the variance. In Study 2 (N = 304), these items also coalesced around a single factor, accounting for 35% of the variance. Study 2 also showed that PATHSEP is correlated moderately and positively with indices of insecure attachment, with the Center for Epidemiological Studies-Depression Scale, and with indices of psychiatric symptomatology (Hopkins Symptom Checklist). PATHSEP also was associated with a poorer profile of adjustment to college. Males reported more pathology of separation-individuation than did females. Evidence supports the construct validity of a shortened version of PATHSEP. Directions for future research are noted. Copyright 2001 John Wiley & Sons, Inc.
Montcuquet, Anne-Sophie; Hervé, Lionel; Navarro, Fabrice; Dinten, Jean-Marc; Mars, Jérôme I
2010-01-01
Fluorescence imaging in diffusive media is an emerging imaging modality for medical applications that uses injected fluorescent markers that bind to specific targets, e.g., carcinoma. The region of interest is illuminated with near-IR light, and the emitted back fluorescence is analyzed to localize the fluorescence sources. When investigating a thick medium, the fluorescence signal decreases with the light travel distance, and any disturbing signal, such as the intrinsic fluorescence of biological tissues (called autofluorescence), is a limiting factor. Several specific markers may also be injected simultaneously to bind to different molecules, and one may want to isolate each specific fluorescent signal from the others. To remove the unwanted fluorescence contributions or separate different specific markers, a spectroscopic approach is explored. Nonnegative matrix factorization (NMF) is the blind positive source separation method we chose. We run an original regularized NMF algorithm we developed on experimental data, and successfully obtain separated in vivo fluorescence spectra.
Blind source separation of fMRI data by means of factor analytic transformations
Langers, Dave R. M.
2009-01-01
In this study, the application of factor analytic (FA) rotation methods in the context of neuroimaging data analysis was explored. Three FA algorithms (ProMax, QuartiMax, and VariMax) were employed to carry out blind source separation in a functional magnetic resonance imaging (fMRI) experiment that
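The three factor analytic rotation criteria named above share one computational core; as a hypothetical sketch (the VariMax case only, using the standard SVD-based iteration, not code from the study), an orthogonal rotation of a loading matrix can be written as:

```python
import numpy as np

def varimax(L, gamma=1.0, n_iter=100, tol=1e-8):
    """Orthogonal VariMax rotation of a loading matrix L (variables x factors),
    via the standard SVD-based iteration."""
    p, k = L.shape
    R = np.eye(k)
    crit = 0.0
    for _ in range(n_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - (gamma / p) * Lr @ np.diag((Lr ** 2).sum(axis=0))))
        R = u @ vt  # best orthogonal rotation for the current criterion
        if s.sum() < crit * (1 + tol):
            break
        crit = s.sum()
    return L @ R, R

# Rotate a random loading matrix; an orthogonal rotation must preserve
# the communalities (row sums of squared loadings)
rng = np.random.default_rng(1)
L0 = rng.standard_normal((20, 3))
Lr, R = varimax(L0)
print(np.allclose((Lr ** 2).sum(axis=1), (L0 ** 2).sum(axis=1)))  # -> True
```

ProMax additionally relaxes orthogonality after a VariMax pass, so rotated factors may correlate; that step is omitted here.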
Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang
2014-05-01
Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has become an emerging infectious disease epidemic in Taiwan, especially in the southern area, which has high annual incidences. For the purposes of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process, and most studies have understated its composite space-time effects. This study proposes a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, including weekly minimum temperature and maximum 24-hour rainfall with a continuous 15-week lagged time, for dengue case variation under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system is useful for providing potential-outbreak spatio-temporal predictions of the dengue fever distribution. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.
Single-Channel Speech Separation using Sparse Non-Negative Matrix Factorization
Schmidt, Mikkel N.; Olsson, Rasmus Kongsgaard
2007-01-01
We apply machine learning techniques to the problem of separating multiple speech sources from a single microphone recording. The method of choice is a sparse non-negative matrix factorization algorithm, which in an unsupervised manner can learn sparse representations of the data. This is applied to the learning of personalized dictionaries from a speech corpus, which in turn are used to separate the audio stream into its components. We show that computational savings can be achieved by segmenting the training data on a phoneme level. To split the data, a conventional speech recognizer is used.
Niccoli, G.
2013-05-01
The antiperiodic transfer matrices associated with higher spin representations of the rational 6-vertex Yang-Baxter algebra are analyzed by generalizing the approach introduced recently in the framework of Sklyanin's quantum separation of variables (SOV) for cyclic representations, spin-1/2 highest weight representations, and spin-1/2 representations of the 6-vertex reflection algebra. This SOV approach allows us to derive exact results that represent complicated tasks for more traditional methods based on the Bethe ansatz and Baxter's Q-operator. In particular, we prove both the completeness of the SOV characterization of the transfer matrix spectrum and its simplicity. The derived characterization of local operators in Sklyanin's quantum separate variables, together with the expression of the scalar products of separate states by determinant formulae, then allows us to compute the form factors of the local spin operators by single determinant formulae similar to those of the scalar products.
Park, Sang Ha; Lee, Seokjin; Sung, Koeng-Mo
Non-negative matrix factorization (NMF) is widely used for monaural musical sound source separation because of its efficiency and good performance. However, an additional clustering process is required because the musical sound mixture is separated into more signals than the number of musical tracks during NMF separation. In the conventional method, manual clustering or training-based clustering is performed with an additional learning process. Recently, a clustering algorithm based on the mel-frequency cepstrum coefficient (MFCC) was proposed for unsupervised clustering. However, MFCC clustering supplies limited information for clustering. In this paper, we propose various timbre features for unsupervised clustering and a clustering algorithm with these features. Simulation experiments are carried out using various musical sound mixtures. The results indicate that the proposed method improves clustering performance, as compared to conventional MFCC-based clustering.
Han, Xianlin; Yang, Kui; Yang, Jingyue; Fikes, Kora N; Cheng, Hua; Gross, Richard W
2006-02-01
The external electric field induces a separation of cations from negative electrolyte ions in the infusate while differential ionization of molecular species that possess differential electrical propensities can be induced in either the positive- or negative-ion mode during the electrospray ionization process. These physical and electrical processes that occur in the electrospray ion source have been used to selectively ionize lipid classes possessing different electrical propensities that are now known as "intrasource separation and selective ionization". However, the chemical principles underlying charge-dependent alterations in ionization efficiencies responsible for the selective ionization of lipid classes are not known with certainty. Herein, we examined the multiple factors that contribute to intrasource separation and selective ionization of lipid classes under optimal instrumental conditions. We demonstrated that many different lipid classes could be selectively ionized in the ion source and that intrasource resolution of distinct molecular constituents was independent of lipid concentration, flow rate, and residual ions under most experimental conditions. Moreover, the presence of alkaline conditions facilitates the selective ionization of many lipid classes through a mechanism independent of the design of the ESI ion source. Collectively, this study provides an empirical foundation for understanding the chemical mechanisms underlying intrasource separation and selective ionization of lipid classes that can potentially be used for global analysis of cellular lipidomes without the need for chromatographic separation.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Ortiz, T.M.
1998-05-01
The range of available data on separation factors in the palladium-hydrogen/deuterium system has been extended. A matched pair of glass-coated bead thermistors was used to measure gas-phase compositions. The compositions of the input gas, assumed also to be the solid-phase composition, were measured independently by mass spectrometry as being within 0.5 mole% of the values used to calibrate the thermistors. This assumption is based on the fact that > 99% of the input gas is absorbed into the solid. Separation factors were measured for 175 K ≤ T ≤ 389 K and for 0.195 ≤ x_H ≤ 0.785.
Zhujie Chu
2016-02-01
Full Text Available Municipal household solid waste (MHSW) has become a serious problem in China over the course of the last two decades, resulting in significant side effects for the environment. Therefore, effective management of MHSW has attracted wide attention from both researchers and practitioners. Separate collection, the first and crucial step in solving the MHSW problem, however, has not been thoroughly studied to date. An empirical survey was conducted among 387 households in Harbin, China in this study. We use a Bayesian Belief Network model to determine the factors influencing separate collection. Four types of factors are identified (political, economic, socio-cultural and technological), based on the PEST (political, economic, social and technological) analytical method. In addition, we further analyze the influential power of the different factors, based on the network structure and probability changes obtained with the Netica software. Results indicate that the technological dimension has the greatest impact on MHSW separate collection, followed by the political and economic dimensions; the socio-cultural dimension impacts MHSW separate collection the least.
Sennikov, S V; Golikova, E A; Kireev, F D; Lopatnikova, J A
2013-04-30
Autoantibodies to cytokines are important biological effector molecules that can regulate cytokine activities. The aim of the study was to develop a protocol to purify autoantibodies to tumor necrosis factor from human serum, for use as a calibration material to determine the absolute content of autoantibodies to tumor necrosis factor by enzyme-linked immunosorbent assay. The proposed protocol includes a set of affinity chromatography methods, namely, Bio-Gel P6DG sorbent to remove albumin from serum, Protein G Sepharose 4 Fast Flow to obtain a total immunoglobulin G fraction of serum immunoglobulins, and Affi-Gel 15 to obtain specifically antibodies to tumor necrosis factor. The addition of a magnetic separation procedure to the protocol eliminated contaminant tumor necrosis factor from the fraction of autoantibodies to tumor necrosis factor. The protocol generated a pure fraction of autoantibodies to tumor necrosis factor, and enabled us to determine the absolute concentrations of different subclasses of immunoglobulin G autoantibodies to tumor necrosis factor in apparently healthy donors.
Calculation of the molecular integrals with the range-separated correlation factor
Silkowski, Michał; Moszynski, Robert
2014-01-01
Explicitly correlated quantum chemical calculations require calculations of five types of molecular integrals beyond the standard electron repulsion integrals. We present a novel scheme, which utilises general ideas of the McMurchie-Davidson technique, to compute these integrals when the so-called "range-separated" correlation factor is used. This correlation factor combines the well-known short-range behaviour, resulting from the electronic cusp condition, with the exact long-range asymptotics found for the helium atom [M. Lesiuk, B. Jeziorski, and R. Moszynski, J. Chem. Phys. 139, 134102 (2013)]. Almost all steps of the presented procedure are formulated recursively, so that an efficient implementation and control of the precision are possible. Additionally, the present formulation is very flexible and general, and it allows for the use of an arbitrary correlation factor in the electronic structure calculations with minor or no changes.
Hou, Shibing; Wu, Jiang; Qin, Yufei; Xu, Zhenming
2010-07-01
Electrostatic separation is an effective and environmentally friendly method for recycling waste printed circuit boards (PCBs) with several kinds of electrostatic separators. However, some notable problems have been detected in its applications that cannot be efficiently resolved by optimizing the separation process. Rather than the separator itself, these problems are mainly caused by external factors such as the nonconductive powder (NP) and the superficial moisture of the feeding granule mixture, and they ultimately lead to inefficient separation. In the present research, the impacts of these external factors were investigated and a robust design was built to optimize the process and weaken the adverse impacts. The most robust parameter setting (25 kV, 80 rpm) was determined from the experimental design. In addition, some theoretical methods, including cyclone separation, are presented to eliminate these problems substantially. This will contribute to efficient electrostatic separation of waste PCBs and to remarkable progress toward industrial applications.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Factors Controlling Redox Speciation of Plutonium and Neptunium in Extraction Separation Processes
Paulenova, Alena [Principal Investigator; Vandegrift, III, George F. [Collaborator
2013-09-24
The objective of the project was to examine the factors controlling the redox speciation of plutonium and neptunium in UREX+ extraction in terms of redox potentials, redox mechanisms, kinetics, and thermodynamics. Researchers employed redox-speciation extraction schemes in parallel with the spectroscopic experiments. The resulting distribution of redox species was studied using spectroscopic, electrochemical, and spectro-electrochemical methods. This work resulted in the collection of data on the redox stability and distribution of redox couples in the nitric acid/nitrate electrolyte and in the development of redox buffers to stabilize the desired oxidation state of separated radionuclides. The effects of temperature and concentrations on the redox behavior of neptunium were evaluated.
Piringer, Martin; Knauder, Werner; Petz, Erwin; Schauberger, Günther
2016-09-01
Direction-dependent separation distances to avoid odour annoyance, calculated with the Gaussian Austrian Odour Dispersion Model AODM and the Lagrangian particle diffusion model LASAT at two sites, are analysed and compared. The relevant short-term peak odour concentrations are calculated with a stability-dependent peak-to-mean algorithm. The same emission and meteorological data, but model-specific atmospheric stability classes are used. The estimate of atmospheric stability is obtained from three-axis ultrasonic anemometers using the standard deviations of the three wind components and the Obukhov stability parameter. The results are demonstrated for the Austrian villages Reidling and Weissbach with very different topographical surroundings and meteorological conditions. Both the differences in the wind and stability regimes and the decrease of the peak-to-mean factors with distance lead to deviations in the separation distances between the two sites. The Lagrangian model, due to its model physics, generally calculates larger separation distances. For worst-case calculations necessary with environmental impact assessment studies, the use of a Lagrangian model is therefore to be preferred over that of a Gaussian model. The study and findings relate to the Austrian odour impact criteria.
Deng, Jiu-shuai; Mao, Ying-bo; Wen, Shu-ming; Liu, Jian; Xian, Yong-jun; Feng, Qi-cheng
2015-02-01
Selective flotation separation of Cu-Zn mixed sulfides has been proven to be difficult. Thus far, researchers have found no satisfactory way to separate Cu-Zn mixed sulfides by selective flotation, mainly because of the complex surface and interface interaction mechanisms in the flotation solution. Undesired activation occurs between copper ions and the sphalerite surfaces. In addition to recycled water and mineral dissolution, ancient fluids in the minerals are observed to be a new source of metal ions. In this study, significant amounts of ancient fluids were found to exist in Cu-Zn sulfide and gangue minerals, mostly as gas-liquid fluid inclusions. The concentration of copper ions released from the ancient fluids reached 1.02 × 10⁻⁶ mol/L, whereas, in the cases of sphalerite and quartz, this concentration was 0.62 × 10⁻⁶ mol/L and 0.44 × 10⁻⁶ mol/L, respectively. As a result, the ancient fluid is a significant source of copper ions compared to mineral dissolution under the same experimental conditions, which promotes the unwanted activation of sphalerite. Therefore, the ancient fluid is considered to be a new factor that affects the selective flotation separation of Cu-Zn mixed sulfide ores.
Separable sustained and selective attention factors are apparent in 5-year-old children.
Underbjerg, Mette; George, Melanie S; Thorsen, Poul; Kesmodel, Ulrik S; Mortensen, Erik L; Manly, Tom
2013-01-01
In adults and older children, evidence consistent with relative separation between selective and sustained attention, superimposed upon generally positive inter-test correlations, has been reported. Here we examine whether this pattern is detectable in 5-year-old children from the healthy population. A new test battery (TEA-Ch(J)) was adapted from measures previously used with adults and older children and administered to 172 5-year-olds. Test-retest reliability was assessed in 60 children. Ninety-eight percent of the children managed to complete all measures. Discrimination of visual and auditory stimuli was good. In a factor analysis, the two TEA-Ch(J) selective attention tasks (one visual, one auditory) loaded onto a common factor and diverged from the two sustained attention tasks (one auditory, one motor), which shared a common loading on the second factor. This pattern, which suggests that the tests are indeed sensitive to underlying attentional capacities, was supported by the relationships between the TEA-Ch(J) factors and Test of Everyday Attention for Children subtests in the older children in the sample. It is possible to gain convincing performance-based estimates of attention at the age of 5 with the results reflecting a similar factor structure to that obtained in older children and adults. The results are discussed in light of contemporary models of attention function. Given the potential advantages of early intervention for attention difficulties, the findings are of clinical as well as theoretical interest.
Application of factor separation to heavy rainfall and cyclogenesis events: Mediterranean examples
Romero, R.
2010-09-01
The Mediterranean basin is an ideal atmospheric research "laboratory", recognized as one of the main cyclogenetic areas in the world. Much of the high-impact weather affecting its coastal countries (notably strong winds and heavy precipitation) has been statistically associated with the near presence of a distinct cyclonic signature. The numerical modelling of these atmospheric circulations is the most powerful tool available to scientists to develop a better physical understanding of the responsible mechanisms. In particular, many studies have used this potential to isolate the role played by different physical factors by means of the factor separation technique. Boundary factors (e.g. orography and latent heat flux from the Mediterranean) and model physics factors (e.g. latent heat release in cloud systems) have typically been considered. Different results about the role of both types of factors in Mediterranean flash flood events will be shown and discussed. Comparatively less attention, however, has been paid to the effects due to internal features of the flow dynamics (jet streaks, troughs, fronts, etc.), probably because, unlike the boundary or model physics factors, modifying or switching off these elements in the simulations is not straightforward. The three-dimensional nature and mutual dependence of the pressure, temperature and wind fields pose serious constraints on the ways these fields can be altered without compromising the delicate dynamical balances that govern both the model equations and actual data. A relatively clean approach to deal with these dynamical factors, based on the concept of potential vorticity (PV) and its invertibility principle, will be presented. The role of upper-level precursor disturbances on heavy-rain-producing western Mediterranean cyclones will be studied by this PV inversion method. Finally, the applicability of the factor separation method to the study of extratropical cyclones in a framework which does not involve
An absorption maximum was observed at 4.9 microns in infrared spectra of human parotid saliva. The factor causing this absorbance was found to be a...nitrate, and heat stability. Thiocyanate was then determined in 16 parotid saliva samples by a spectrophotometric method, which involved formation of
Setyawan, Daddy; Rohman, Budi
2014-09-01
Verification of the maximum radial power peaking factor due to insertion of an FPM-LEU target in the core of the RSG-GAS reactor. The radial power peaking factor in the RSG-GAS reactor is a very important parameter for the safety of the reactor during operation. Data on the radial power peaking factor due to the insertion of a Fission Product Molybdenum with Low Enriched Uranium (FPM-LEU) target were reported by PRSG to BAPETEN through the RSG-GAS Safety Analysis Report for FPM-LEU target irradiation. In order to support the evaluation of the Safety Analysis Report incorporated in the submission, the assessment unit of BAPETEN carried out an independent assessment to verify safety-related parameters in the SAR, including the neutronic aspect. The work includes verification of the maximum radial power peaking factor change due to the insertion of the FPM-LEU target in the RSG-GAS reactor by a computational method using MCNP5 and ORIGEN2. From the results of the calculations, the new maximum value of the radial power peaking factor due to the insertion of the FPM-LEU target is 1.27, smaller than the limit of 1.4 allowed in the SAR.
Yen-Chun Chou
2010-01-01
Full Text Available Perfusion magnetic resonance brain imaging induces temporal signal changes in brain tissues, manifesting distinct blood-supply patterns for the profound analysis of cerebral hemodynamics. We employed independent factor analysis to blindly separate such dynamic images into different maps, that is, artery, gray matter, white matter, vein and sinus, and choroid plexus, in conjunction with the corresponding signal-time curves. The averaged signal-time curve on the segmented arterial area was further used to calculate the relative cerebral blood volume (rCBV), relative cerebral blood flow (rCBF), and mean transit time (MTT). The averaged ratios of rCBV, rCBF, and MTT between gray and white matter for normal subjects were congruent with those in the literature.
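The perfusion indices named above are conventionally obtained from standard indicator-dilution relations (stated here for orientation, not taken from the article); with $C(t)$ the tracer concentration-time curve derived from the signal change:

```latex
\mathrm{rCBV} \propto \int_0^{\infty} C(t)\,dt, \qquad
\mathrm{MTT} = \frac{\int_0^{\infty} t\,C(t)\,dt}{\int_0^{\infty} C(t)\,dt}, \qquad
\mathrm{rCBF} = \frac{\mathrm{rCBV}}{\mathrm{MTT}}
```

The last equality is the central volume principle; the arterial curve from the segmented artery map serves to normalize these quantities across subjects.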
Levy-Bencheton, D; Terras, V
2015-01-01
We pursue our study of the antiperiodic dynamical 6-vertex model using Sklyanin's separation of variables approach, allowing in the model new possible global shifts of the dynamical parameter. We show in particular that the spectrum and eigenstates of the antiperiodic transfer matrix are completely characterized by a system of discrete equations. We prove the existence of different reformulations of this characterization in terms of functional equations of Baxter's type. We notably consider the homogeneous functional $T$-$Q$ equation which is the continuous analog of the aforementioned discrete system and show, in the case of a model with an even number of sites, that the complete spectrum and eigenstates of the antiperiodic transfer matrix can equivalently be described in terms of a particular class of its $Q$-solutions, hence leading to a complete system of Bethe equations. Finally, we compute the form factors of local operators for which we obtain determinant representations in finite volume.
Generic Uniqueness of a Structured Matrix Factorization and Applications in Blind Source Separation
Domanov, Ignat; Lathauwer, Lieven De
2016-06-01
Algebraic geometry, although little explored in signal processing, provides tools that are very convenient for investigating generic properties in a wide range of applications. Generic properties are properties that hold "almost everywhere". We present a set of conditions that are sufficient for demonstrating the generic uniqueness of a certain structured matrix factorization. This set of conditions may be used as a checklist for generic uniqueness in different settings. We discuss two particular applications in detail. We provide a relaxed generic uniqueness condition for joint matrix diagonalization that is relevant for independent component analysis in the underdetermined case. We present generic uniqueness conditions for a recently proposed class of deterministic blind source separation methods that rely on mild source models. For the interested reader we provide some intuition on how the results are connected to their algebraic geometric roots.
Paterson, Mike; Thorup, Mikkel; Winkler, Peter; Zwick, Uri
2007-01-01
How far can a stack of $n$ identical blocks be made to hang over the edge of a table? The question dates back to at least the middle of the 19th century and the answer to it was widely believed to be of order $\\log n$. Recently, Paterson and Zwick constructed $n$-block stacks with overhangs of order $n^{1/3}$, exponentially better than previously thought possible. We show here that order $n^{1/3}$ is indeed best possible, resolving the long-standing overhang problem up to a constant factor.
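For context, the classical one-block-per-level construction behind the old $\log n$ belief reaches an overhang of $H_n/2 \approx (\ln n)/2$ block lengths; a tiny script (illustrative only, constants in the $n^{1/3}$ bound omitted) contrasts the two growth rates:

```python
def harmonic_overhang(n):
    """Overhang (in block lengths) of the classical one-block-per-level stack:
    the i-th block from the top protrudes 1/(2i), so the total is H_n / 2."""
    return sum(1.0 / (2 * i) for i in range(1, n + 1))

# Logarithmic growth of the classical stack vs. the n^(1/3) scaling
# achieved by the Paterson-Zwick constructions (constant factors omitted):
for n in (10, 1000, 100000):
    print(n, round(harmonic_overhang(n), 2), round(n ** (1 / 3), 2))
```

Already at $n = 1000$ the $n^{1/3}$ scaling dwarfs the harmonic overhang, which illustrates why the multi-block-per-level stacks were such a surprise.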
Navarro, J.L.; Madariaga, J.A.; Santamaria, C.M.; Saviron, J.M.; Carrion, J.A.
1985-01-01
Measurements of the separation of liquid mixtures of n-heptane/benzene and carbon tetrachloride/cyclohexane in a thermogravitational column are reported. The results show that thermal diffusion columns of little mechanical precision can furnish suitable thermal diffusion factors when the diffusion coefficient, viscosity, density, and compressibility factor for the mixture are known. 23 references, 3 figures, 1 table.
ZI Xuemin; ZHANG Runchu; LIU Minqian
2006-01-01
Fractional factorial split-plot (FFSP) designs are of special interest for investigation because of their special structures. There are two types of factors in an FFSP design, the whole-plot (WP) factors and the sub-plot (SP) factors, which can form three types of two-factor interactions: WP2fi, WS2fi and SP2fi. This paper considers FFSP designs with resolution III or IV under the clear effects criterion. It derives upper and lower bounds on the maximum numbers of clear WP2fis and WS2fis for FFSP designs, gives some methods for constructing the desired FFSP designs, and further examines the performance of the construction methods.
Blind source separation for groundwater pressure analysis based on nonnegative matrix factorization
Alexandrov, Boian S.; Vesselinov, Velimir V.
2014-09-01
The identification of the physical sources causing spatial and temporal fluctuations of aquifer water levels is a challenging yet very important hydrogeological task. The fluctuations can be caused by variations in natural and anthropogenic sources such as pumping, recharge, barometric pressures, etc. The source identification can be crucial for conceptualization of the hydrogeological conditions and characterization of aquifer properties. We propose a new computational framework for model-free inverse analysis of pressure transients based on the Nonnegative Matrix Factorization (NMF) method for Blind Source Separation (BSS) coupled with the k-means clustering algorithm, which we call NMFk. NMFk is capable of identifying a set of unique sources from a set of experimentally measured mixed signals, without any information about the sources, their transients, and the physical mechanisms and properties controlling the signal propagation through the subsurface flow medium. Our analysis only requires information about pressure transients at a number of observation points, m, where m ≥ r, and r is the number of unknown unique sources causing the observed fluctuations. We apply this new analysis to a data set from the Los Alamos National Laboratory site. We demonstrate that the sources identified by NMFk have real physical origins: barometric pressure and water-supply pumping effects. We also estimate the barometric pressure efficiency of the monitoring wells. The possible applications of the NMFk algorithm are not limited to hydrogeology problems; NMFk can be applied to any problem where temporal system behavior is observed at multiple locations and an unknown number of physical sources are causing these fluctuations.
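The coupling of NMF with run-to-run clustering is only described at a high level in the abstract; a toy sketch of the underlying idea, judging a candidate source count r by how reproducible the factors are across random restarts, might look as follows (simplified to cosine matching instead of full k-means; all names and the synthetic data are assumptions):

```python
import numpy as np

def nmf(V, r, rng, n_iter=300, eps=1e-9):
    """Plain multiplicative-update NMF; returns sources H with unit-norm rows."""
    W = rng.random((V.shape[0], r)) + eps
    H = rng.random((r, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return H / np.linalg.norm(H, axis=1, keepdims=True)

def stability(V, r, n_runs=8):
    """Mean best-match cosine similarity of sources across random restarts.
    Values near 1 indicate a reproducible (well-chosen) source count r."""
    rng = np.random.default_rng(0)
    runs = [nmf(V, r, rng) for _ in range(n_runs)]
    ref = runs[0]
    scores = [np.mean(np.max(ref @ H.T, axis=1)) for H in runs[1:]]
    return float(np.mean(scores))

# Two synthetic "pressure sources" observed at five monitoring points
x = np.linspace(0.0, 1.0, 150)
S = np.vstack([np.exp(-((x - 0.25) ** 2) / 0.01),
               np.exp(-((x - 0.75) ** 2) / 0.01)])
rng = np.random.default_rng(42)
V = (rng.random((5, 2)) + 0.05) @ S
print(stability(V, 2))  # near 1 when r matches the true source count
```

NMFk proper clusters the factors from all restarts with k-means and scans over r; the reproducibility criterion sketched here is the same ingredient in a simpler form.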
Amtmann, E.; Kimura, T.; Oyama, J.; Doden, E.; Potulski, M.
1979-01-01
At the age of 30 days, female Sprague-Dawley rats were placed on a 3.66 m radius centrifuge and subsequently exposed almost continuously for 810 days to either 2.76 or 4.15 G. An age-matched control group of rats was raised near the centrifuge facility at earth gravity. Three further control groups of rats were obtained from the animal colony and sacrificed at the age of 34, 72 and 102 days. A total of 16 variables were simultaneously factor analyzed by a maximum-likelihood extraction routine and the factor loadings presented after rotation to simple structure by a varimax rotation routine. The variables include the G-load, age, body mass, femoral length and cross-sectional area, inner and outer radii, density and strength at the mid-length of the femur, and dry weight of the gluteus medius, semimembranosus and triceps surae muscles. Factor analyses on A) all controls, B) all controls and the 2.76 G group, and C) all controls and centrifuged animals, produced highly similar loading structures of three common factors which accounted for 74%, 68% and 68%, respectively, of the total variance. The 3 factors were interpreted as: 1. An age and size factor which stimulates the growth in length and diameter and increases the density and strength of the femur. This factor is positively correlated with G-load but is also active in the control animals living at earth gravity. 2. A growth inhibition factor which acts on body size, femoral length and on both the outer and inner radius at mid-length of the femur. This factor is intensified by centrifugation.
Factors That Control Successful Entropically Driven Chiral Separations in SFC and HPLC.
Stringham, R W; Blackwell, J A
1997-04-01
With temperature increases, selectivity of chiral separations decreases until enantiomers coelute at an isoelution temperature. Above this temperature, elution order should reverse and selectivity will increase with temperature. In this region, separation is termed "entropically driven". Entropically driven chiral separations hold the promise of being able to concurrently increase selectivity and column efficiency by means of increased temperature. The ability to achieve such separations is hindered by high isoelution temperatures. The isoelution temperature is determined by a balance of enthalpic and entropic contributions. A variety of mobile phase modifiers are evaluated for their ability to moderate these contributions. Results suggest that more use should be made of non-alcohol modifiers. The major barrier to entropically driven separations was found to be the nonspecific retention increase that is characteristic when the critical temperature is traversed. Use of hexane in place of CO2 shifts the position of the retention increase away from the temperature range used in this study, and dramatically successful entropically driven chiral separations are obtained.
Skarstrom, C.
1959-03-10
A centrifugal separator is described for separating gaseous mixtures where the temperature gradients both longitudinally and radially of the centrifuge may be controlled effectively to produce a maximum separation of the process gases flowing through. The invention provides for the balancing of increases and decreases in temperature in various zones of the centrifuge chamber as the result of compression and expansion, respectively, of process gases, and may be employed effectively both to neutralize harmful temperature gradients and to utilize beneficial temperature gradients within the centrifuge.
Wright, L.; Coddington, O.; Pilewskie, P.
2015-12-01
Current challenges in Earth remote sensing require improved instrument spectral resolution, spectral coverage, and radiometric accuracy. Hyperspectral instruments, deployed on both aircraft and spacecraft, are a growing class of Earth observing sensors designed to meet these challenges. They collect large amounts of spectral data, allowing thorough characterization of both atmospheric and surface properties. The higher accuracy and increased spectral and spatial resolutions of new imagers require new numerical approaches for processing imagery and separating surface and atmospheric signals. One potential approach is source separation, which allows us to determine the underlying physical causes of observed changes. Improved signal separation will allow hyperspectral instruments to better address key science questions relevant to climate change, including land-use changes, trends in clouds and atmospheric water vapor, and aerosol characteristics. In this work, we investigate a Non-negative Matrix Factorization (NMF) method for the separation of atmospheric and land surface signal sources. NMF offers marked benefits over other commonly employed techniques, including non-negativity, which avoids physically impossible results, and adaptability, which allows the method to be tailored to hyperspectral source separation. We adapt our NMF algorithm to distinguish between contributions from different physically distinct sources by introducing constraints on spectral and spatial variability and by using library spectra to inform separation. We evaluate our NMF algorithm with simulated hyperspectral images as well as hyperspectral imagery from several instruments, including the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), the NASA Hyperspectral Imager for the Coastal Ocean (HICO) and the National Ecological Observatory Network (NEON) Imaging Spectrometer.
Separate Training for Conditional Random Fields Using Co-occurrence Rate Factorization
Zhu, Zhemin; Hiemstra, Djoerd; Apers, Peter M.G.; Wombacher, Andreas
2012-01-01
The standard training method of Conditional Random Fields (CRFs) is very slow for large-scale applications. As an alternative, piecewise training divides the full graph into pieces, trains them independently, and combines the learned weights at test time. In this paper, we present separate training
Separation Anxiety Disorder in Childhood as a Risk Factor for Future Mental Illness
Lewinsohn, Peter M.; Holm-Denoma, Jill M.; Small, Jason W.; Seeley, John R.; Joiner, Thomas E.
2008-01-01
A study to examine the association between childhood separation anxiety disorder (SAD) and the risk of the development of psychopathology during young adulthood was conducted. Results showed that SAD contributed to the risk for the development of internalizing disorders, namely panic and depression, but decreased the risk for externalizing…
Politzer, Peter; Murray, Jane S
2015-02-01
We discuss three molecular/crystalline properties that we believe to be among the factors that influence the impact/shock sensitivities of energetic materials (i.e., their vulnerabilities to unintended detonation due to impact or shock). These properties are (a) the anomalously strong positive electrostatic potentials in the central regions of their molecular surfaces, (b) the free space per molecule in their crystal lattices, and (c) their maximum heats of detonation per unit volume. Overall, sensitivity tends to become greater as these properties increase; however these are general trends, not correlations. Nitramines are exceptions in that their sensitivities show little or no variation with free space in the lattice and heat of detonation per unit volume. We outline some of the events involved in detonation initiation and show how the three properties are related to different ones of these events.
Nakata, Manabu; Okada, Takashi; Komai, Yoshinori; Nohara, Hiroki [Kyoto Univ. (Japan). Hospital]
1996-08-01
Modern linear accelerators have four independent jaws and multileaf collimators (MLC) of 1 cm width at the isocenter. Asymmetric fields defined by such independent jaws and irregular multileaf collimated fields can be used to match adjacent fields or to spare the spinal cord in external photon beam radiotherapy. We have developed a new approximate algorithm for depth dose calculations at the collimator rotation axis. The program is based on Clarkson's principle, and uses a more accurate modification of Day's method for asymmetric fields. Using this method, tissue-maximum ratios (TMR) and field factors of ten kinds of asymmetric fields and ten different irregular multileaf collimated fields were calculated and compared with the measured data for 6 MV and 15 MV photon beams. The dose accuracy with the general A/Pe method was about 3%; however, with the new modified Day's method, accuracy was within 1.7% for TMR and 1.2% for field factors. The calculated TMR and field factors were found to be in good agreement with measurements for both the 6 MV and 15 MV photon beams. (author)
Greisen, Mia H; Altar, C Anthony; Bolwig, Tom G; Whitehead, Richard; Wörtwein, Gitta
2005-03-15
Repeated maternal separation of rat pups during the early postnatal period may affect brain-derived neurotrophic factor (BDNF) or neurons in brain areas that are compromised by chronic stress. In the present study, a highly significant increase in hippocampal BDNF protein concentration was found in adult rats that as neonates had been subjected to 180 min of daily separation compared with handled rats separated for 15 min daily. BDNF protein was unchanged in the frontal cortex and hypothalamus/paraventricular nucleus. Expression of BDNF mRNA in the CA1, CA3, or dentate gyrus of the hippocampus or in the paraventricular hypothalamic nucleus was not affected by maternal separation. All animals displayed similar behavioral patterns in a forced-swim paradigm, which did not affect BDNF protein concentration in the hippocampus or hypothalamus. Repeated administration of bromodeoxyuridine revealed equal numbers of surviving, newly generated granule cells in the dentate gyrus of adult rats from the 15 min or 180 min groups. The age-dependent decline in neurogenesis from 3 months to 7 months of age did not differ between the groups. Insofar as BDNF can stimulate neurogenesis and repair, we propose that the elevated hippocampal protein concentration found in maternally deprived rats might be a compensatory reaction to separation during the neonatal period, maintaining adult neurogenesis at levels equal to those of the handled rats.
Bachar, Eytan; Stein, Daniel; Canetti, Laura; Gur, Eitan
2008-11-01
Given the susceptibility of eating disorders (ED) to stressful life events, we wanted to examine longitudinally whether two childhood adversities, (1) surgery and (2) parental separation, would affect abnormal eating attitudes in adolescents. Consecutively for 4 years, the eating attitude test (EAT-26) and the eating disorder inventory-2 (EDI-2) questionnaires were administered to students from grades 7 through 10 and 8 through 11. Multilevel analysis revealed that parental separation and oral or cosmetic dermatologic surgeries were significantly correlated with EAT-26 and EDI-2 scores throughout the 4 years of the study. Post-hoc interpretation suggests a connection between (A) surgical intervention in the oral cavity and problematic eating attitudes, and (B) cosmetic dermatologic surgery and greater awareness of body appearance, a feature which might characterize adolescents who are prone to developing ED.
On-line method of determining utilization factor in Hg-196 photochemical separation process
Grossman, Mark W.; Moskowitz, Philip E.
1992-01-01
The present invention is directed to a method for determining the utilization factor [U] in a photochemical mercury enrichment process (¹⁹⁶Hg) by measuring relative ¹⁹⁶Hg densities using absorption spectroscopy.
Collier, Justine
2010-07-01
Cell division in Gram-negative bacteria involves the co-ordinated invagination of the three cell envelope layers to form two new daughter cell poles. This complex process starts with the polymerization of the tubulin-like protein FtsZ into a Z-ring at mid-cell, which drives cytokinesis and recruits numerous other proteins to the division site. These proteins are involved in Z-ring constriction, inner- and outer-membrane invagination, peptidoglycan remodelling and daughter cell separation. Three papers in this issue of Molecular Microbiology, from the teams of Lucy Shapiro, Martin Thanbichler and Christine Jacobs-Wagner, describe a novel protein, called DipM for Division Involved Protein with LysM domains, that is required for cell division in Caulobacter crescentus. DipM localizes to the mid-cell during cell division, where it is necessary for the hydrolysis of the septal peptidoglycan to remodel the cell wall. Loss of DipM results in severe defects in cell envelope constriction, which is deleterious under fast-growth conditions. State-of-the-art microscopy experiments reveal that the peptidoglycan is thicker and that the cell wall is incorrectly organized in DipM-depleted cells compared with wild-type cells, demonstrating that DipM is essential for reorganizing the cell wall at the division site, for envelope invagination and cell separation in Caulobacter.
Simona OZHEK
2007-05-01
The process of separation and individuation is a developmental psychological process, which takes place in various phases of child development within the first three years of life. These phases include the Normal Autistic Phase, the Normal Symbiotic Phase, the Separation-Individuation Phase (with sub-phases Differentiation, Practicing and Rapprochement, On the Way to Object Constancy) and the Final Separation and Psychological Birth of the Human Infant. Undisturbed transition through the developmental phases leads to the establishment of psychological structure in the human infant, which means that he reaches autonomy and independence, thus becoming an individual. For children who were born with physical disabilities and have their intellectual capacities preserved (e.g. in cerebral palsy, muscular dystrophy, etc.), the fulfillment of this process is endangered, because their development has become stuck at the Normal Symbiotic Phase due to two types of factors. The first group represents factors of physical disability (e.g. the impossibility to move, to obtain independent experiences and, consequently, the inability to detach from the Object), while the second group represents reactions to this hindrance (e.g. over-protection), which further thwart attempts to detach from the Object.
Calculation of the molecular integrals with the range-separated correlation factor
Silkowski, Michał; Lesiuk, Michał, E-mail: lesiuk@tiger.chem.uw.edu.pl; Moszynski, Robert [Faculty of Chemistry, University of Warsaw, Pasteura 1, 02-093 Warsaw (Poland)
2015-03-28
Explicitly correlated quantum chemical calculations require calculations of five types of two-electron integrals beyond the standard electron repulsion integrals. We present a novel scheme, which utilises general ideas of the McMurchie-Davidson technique, to compute these integrals when the so-called “range-separated” correlation factor is used. This correlation factor combines the well-known short range behaviour resulting from the electronic cusp condition, with the exact long-range asymptotics derived for the helium atom [Lesiuk, Jeziorski, and Moszynski, J. Chem. Phys. 139, 134102 (2013)]. Almost all steps of the presented procedure are formulated recursively, so that an efficient implementation and control of the precision are possible. Additionally, the present formulation is very flexible and general, and it allows for use of an arbitrary correlation factor in the electronic structure calculations with minor or no changes.
2010-10-01
... of the acquiring company, the acquiring carrier shall first sum its existing (pre-purchase) access lines (A) with the total access lines acquired from selling company (B). Then, multiply its factors and... TELECOMMUNICATIONS PROPERTY COSTS, REVENUES, EXPENSES, TAXES AND RESERVES FOR TELECOMMUNICATIONS COMPANIES 1...
Briggs, J. P.; Pennycook, S. J.; Fergusson, J. R.; Jäykkä, J.; Shellard, E. P. S.
2016-04-01
We present a case study describing efforts to optimise and modernise "Modal", the simulation and analysis pipeline used by the Planck satellite experiment for constraining general non-Gaussian models of the early universe via the bispectrum (or three-point correlator) of the cosmic microwave background radiation. We focus on one particular element of the code: the projection of bispectra from the end of inflation to the spherical shell at decoupling, which defines the CMB we observe today. This code involves a three-dimensional inner product between two functions, one of which requires an integral, on a non-rectangular domain containing a sparse grid. We show that by employing separable methods this calculation can be reduced to a one-dimensional summation plus two integrations, reducing the overall dimensionality from four to three. The introduction of separable functions also solves the issue of the non-rectangular sparse grid. This separable method can become unstable in certain scenarios and so the slower non-separable integral must be calculated instead. We present a discussion of the optimisation of both approaches. We demonstrate significant speed-ups of ≈100×, arising from a combination of algorithmic improvements and architecture-aware optimisations targeted at improving thread and vectorisation behaviour. The resulting MPI/OpenMP hybrid code is capable of executing on clusters containing processors and/or coprocessors, with strong-scaling efficiency of 98.6% on up to 16 nodes. We find that a single coprocessor outperforms two processor sockets by a factor of 1.3× and that running the same code across a combination of both microarchitectures improves performance-per-node by a factor of 3.38×. By making bispectrum calculations competitive with those for the power spectrum (or two-point correlator) we are now able to consider joint analysis for cosmological science exploitation of new data.
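The dimensionality reduction described above rests on separability: when the integrand of a multi-dimensional inner product factorizes into one-dimensional pieces, the full grid summation collapses into a product of one-dimensional sums. The toy sketch below demonstrates that idea only; it is not the Modal code, and the functions chosen are arbitrary assumptions.

```python
# Toy separable reduction: a 3-D inner product whose integrand factorises
# into 1-D pieces is evaluated as a product of 1-D sums, cutting the cost
# from O(n^3) grid terms to O(n).
import numpy as np

n = 40
x = np.linspace(0.0, 1.0, n)
f, g, h = np.exp(-x), np.cos(x), x ** 2

# Naive evaluation: sum over the full 3-D grid (O(n^3) terms).
naive = sum(f[i] * g[j] * h[k]
            for i in range(n) for j in range(n) for k in range(n))

# Separable evaluation: three independent 1-D sums (O(n) terms).
separable = f.sum() * g.sum() * h.sum()

print(naive, separable)  # the two results agree to floating-point precision
```

In the actual pipeline the summand is not globally separable, which is why the abstract notes that the slower non-separable integral must sometimes be computed instead.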
Generic uniqueness of a structured matrix factorization and applications in blind source separation
Domanov, Ignat; De Lathauwer, Lieven
2016-01-01
Algebraic geometry, although little explored in signal processing, provides tools that are very convenient for investigating generic properties in a wide range of applications. Generic properties are properties that hold "almost everywhere". We present a set of conditions that are sufficient for demonstrating the generic uniqueness of a certain structured matrix factorization. This set of conditions may be used as a checklist for generic uniqueness in different settings. We discuss two partic...
Larsen, B; Fox, B L; Burke, M F; Hruby, V J
1979-01-01
Experimental conditions and parameters involved in high performance liquid chromatography (HPLC) separations of the peptide hormone oxytocin and seven of its diastereoisomers, namely [1-hemi-D-cystine]-, [2-D-tyrosine]-, [4-D-glutamine]-, [5-D-asparagine]-, [6-hemi-D-cystine]-, [7-D-proline]-, and [8-D-leucine]-oxytocin, on reverse phase columns were investigated. The effects of solvent, pH, and salt concentration were studied. Using the solvent systems 10% tetrahydrofuran-ammonium acetate buffer or 18% acetonitrile-ammonium acetate buffer and the μBondapak C18 support, oxytocin was separated from each of its diastereoisomers under all conditions studied, but the order of elution of diastereoisomers was highly dependent on solvent and to a lesser extent on pH. Separations of the hormone and its diastereoisomers on reverse phase HPLC and on classical partition chromatography on Sephadex G-25 were compared. The results are discussed in terms of the interactions of the solute with the reverse phase column and the solvent system. Implications of these findings in terms of the different solution conformations of the peptides are discussed.
Kangkang SHAN; Anran WANG
2015-01-01
China is the world's largest food producer, and it also has the largest food demand. The stability of China's food production directly affects the supply and demand situation of the world food market. In the context of an evolving Chinese agricultural structure, this paper studies the separation of factors of production from grain and issues concerning food safety. It is found that the arable land for food production within the agricultural sector continues to flow to the non-food production sector while arable land is shrinking in China, and that the urbanization of the population is the main reason for the reduction in the food production workforce, resulting in a decline in the overall quality of food production labor. By analyzing the panel data estimation results for the food production function, it is found that arable land and labor are still important factors for food production in China at present, and their flow out of food production poses a major threat to food production and security.
L. V. Repin
2013-01-01
This article describes radiation risk factors for several gender-age population groups according to Russian statistical and medical-demographic data, and evaluates the lethality rate for separate nosologic forms of malignant neoplasms based on Russian cancer registries according to the method of the International Agency for Research on Cancer. Relative damage factors are calculated for the gender-age groups under consideration. The tissue weighting factors recommended by the ICRP for calculating effective doses are compared with relative damage factors calculated by the ICRP for the nominal population and with similar factors calculated in this work for separate population cohorts in the Russian Federation. The significance of the differences and the feasibility of using tissue weighting factors adapted for the Russian population in assessing population risks in cohorts of different gender-age compositions have been assessed.
Joncourt, Raphael; Eberle, Andrea B; Rufener, Simone C; Mühlemann, Oliver
2014-01-01
Nonsense-mediated mRNA decay (NMD), which is best known for degrading mRNAs with premature termination codons (PTCs), is thought to be triggered by aberrant translation termination at stop codons located in an environment of the mRNP that is devoid of signals necessary for proper termination. In mammals, the cytoplasmic poly(A)-binding protein 1 (PABPC1) has been reported to promote correct termination and therewith antagonize NMD by interacting with the eukaryotic release factors 1 (eRF1) and 3 (eRF3). Using tethering assays in which proteins of interest are recruited as MS2 fusions to a NMD reporter transcript, we show that the three N-terminal RNA recognition motifs (RRMs) of PABPC1 are sufficient to antagonize NMD, while the eRF3-interacting C-terminal domain is dispensable. The RRM1-3 portion of PABPC1 interacts with eukaryotic initiation factor 4G (eIF4G) and tethering of eIF4G to the NMD reporter also suppresses NMD. We identified the interactions of the eIF4G N-terminus with PABPC1 and the eIF4G core domain with eIF3 as two genetically separable features that independently enable tethered eIF4G to inhibit NMD. Collectively, our results reveal a function of PABPC1, eIF4G and eIF3 in translation termination and NMD suppression, and they provide additional evidence for a tight coupling between translation termination and initiation.
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
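A WARP-style model can be sketched as a regression on log-transformed concentration statistics, followed by the factor-of-10 check quoted above: an error of one log10 unit corresponds to a factor of 10 in concentration. The predictor, coefficients, and data below are synthetic placeholders, not the NAWQA data set or the actual WARP explanatory variables.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 112                                    # sites, matching the model-development set size
use_intensity = rng.uniform(0.1, 5.0, n)   # hypothetical watershed predictor
# Synthetic log10 annual maximum concentrations with lognormal scatter.
log_conc = 0.8 * np.log10(use_intensity) + rng.normal(0.0, 0.3, n)

# Ordinary least squares in log space (WARP models regress transformed
# concentration statistics on watershed characteristics).
X = np.column_stack([np.ones(n), np.log10(use_intensity)])
beta, *_ = np.linalg.lstsq(X, log_conc, rcond=None)
pred = X @ beta

# One log10 unit of prediction error corresponds to a factor of 10.
within_factor_10 = float(np.mean(np.abs(pred - log_conc) < 1.0))
print(f"fraction of sites within a factor of 10: {within_factor_10:.2f}")
```

The same back-transformed comparison is how the abstract's "within a factor of 10 of the observed concentration statistics" performance claim would be evaluated against held-out validation sites.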
Battaglia, Marco; Touchette, Évelyne; Garon-Carrier, Gabrielle; Dionne, Ginette; Côté, Sylvana M.; Vitaro, Frank; Tremblay, Richard E.; Boivin, Michel
2016-01-01
Background: Little is known about how children differ in the onset and evolution of separation anxiety (SA) symptoms during the preschool years, and how SA develops into separation anxiety disorder. In a large, representative population-based sample, we investigated the developmental trajectories of SA symptoms from infancy to school entry, their…
Biba, Mirlinda; Jiang, Eileen; Mao, Bing; Zewge, Daniel; Foley, Joe P; Welch, Christopher J
2013-08-23
New mixed-mode columns consisting of reversed-phase and ion-exchange separation modes were evaluated for the analysis of short RNA oligonucleotides (∼20mers). Conventional analysis for these samples typically involves using two complementary methods: strong anion-exchange liquid chromatography (SAX-LC) for separation based on charge, and ion-pair reversed-phase liquid chromatography (IP-RPLC) for separation based on hydrophobicity. Recently introduced mixed-mode high performance liquid chromatography (HPLC) columns combine both reversed-phase and ion-exchange modes, potentially offering a simpler analysis by combining the benefits of both separation modes into a single method. Analysis of a variety of RNA oligonucleotide samples using three different mixed-mode stationary phases showed some distinct benefits for oligonucleotide separation and analysis. When using these mixed-mode columns with typical IP-RPLC mobile phase conditions, such as ammonium acetate or triethylammonium acetate as the primary ion-pair reagent, the separation was mainly based on the IP-RPLC mode. However, when changing the mobile phase conditions to those more typical for SAX-LC, such as salt gradients with NaCl or NaBr, very different separation patterns were observed due to mixed-mode interactions. In addition, the Scherzo SW-C18 and SM-C18 columns with sodium chloride or sodium bromide salt gradients also showed significant improvements in peak shape.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions: p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps. The first step solves our optimization problem without considering the equal margin posteriors from the two views; in the second step, we then enforce the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Meyer, Raphael; Schädler, Bruno; Viviroli, Daniel; Weingartner, Rolf
2010-05-01
Base flow is a desirable entity to know, for water management in general and particularly for climate change impact studies. Base flow is most often defined as that part of total discharge which originates from delayed storages in a river catchment. During a prolonged period without rain, base flow is the sole contributor to discharge. Base flow therefore makes a river perennial. A high base flow contribution to total annual discharge makes a river more stable with respect to meteorological droughts. Annual base flow from a catchment cannot be determined exactly. Only total discharge can be measured with high accuracy. Therefore, base flow has to be estimated with appropriate methods. Calculating an entity which cannot be verified by measurements is easy. By defining the entity with a calculation procedure, the result is numerically always right. It is actually much more difficult to understand the results, i.e. how these outcomes should be interpreted. The present study investigates the application of three different base flow separation procedures for numerous (up to 40) meso-scale catchments in Switzerland. The methods of Demuth (1993), Wittenberg (1999) and the Institute of Hydrology (1980) are different approaches to determining base flow based on daily runoff data. The method of Demuth and the separation of base flow according to the Institute of Hydrology are statistical methods: Demuth's method is based on the graphical approach of Kille (1970), and the procedure of the Institute of Hydrology is an empirical smoothing method. In contrast to these, the method of Wittenberg does not presume linearity between storage and outflow. Analyzing the results, among each other and in comparison with physiographic characteristics of the catchments under consideration, leads to a more detailed picture of the ongoing processes. At least the dominant control factors for base flow in the Swiss Midlands should be detectable. These are expected to be found first of all among geology and climate, which
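As a concrete illustration of what a base flow separation procedure does, the sketch below applies the standard one-parameter recursive digital filter (commonly attributed to Lyne and Hollick) to a synthetic hydrograph. Note that this is a common textbook filter, not one of the three procedures (Demuth, Wittenberg, Institute of Hydrology) compared in the study; the filter parameter and the synthetic data are assumptions.

```python
import numpy as np

def baseflow_filter(q, alpha=0.925):
    """One-parameter recursive digital filter: split discharge q into
    quickflow and base flow (alpha controls the recession smoothing)."""
    qf = np.zeros_like(q, dtype=float)                # quickflow component
    for i in range(1, len(q)):
        qf[i] = alpha * qf[i - 1] + 0.5 * (1.0 + alpha) * (q[i] - q[i - 1])
        qf[i] = max(qf[i], 0.0)                       # quickflow cannot be negative
    return np.clip(q - qf, 0.0, None)                 # base flow = remainder, >= 0

# Synthetic daily hydrograph: slow recession plus two storm peaks.
days = np.arange(120)
q = (5.0 * np.exp(-days / 60.0) + 1.0
     + 8.0 * np.exp(-((days - 30) / 3.0) ** 2)
     + 6.0 * np.exp(-((days - 80) / 4.0) ** 2))
bf = baseflow_filter(q)
bfi = bf.sum() / q.sum()                              # base flow index, between 0 and 1
print(f"base flow index = {bfi:.2f}")
```

Comparing the base flow index produced by different separation procedures across many catchments is exactly the kind of inter-method analysis the study performs, which is why the interpretation, rather than the computation, is the hard part.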
HUANG Qian; WANG Yun-dan; WU Tao; JIANG Shan; HU Yan-ling; PEI Guo-xian
2009-01-01
Background Platelet-rich plasma (PRP), as a storage vehicle of growth factors, has been successfully used in clinical applications, but in most cases the platelets were autologous. However, the large volume of blood withdrawn has detrimental effects on patients with anemia or poor general health. To overcome these limitations, this study was designed to separate the growth factors in homologous platelet-rich plasma. Methods Gel chromatography with a Superdex-75 column was applied to separate PRP supernatants into four major fractions. The four fractions were then vacuum freeze-dried and re-dissolved in phosphate buffered saline. Protein concentrations in PRP and in the four fractions were detected by bicinchoninic acid protein assay; platelet-derived growth factor-AB (PDGF-AB) and transforming growth factor β1 (TGF-β1) levels were determined by sandwich enzyme-linked immunosorbent assays. The effects of the fractions on the proliferation of human marrow-derived mesenchymal stem cells (MSCs) were determined by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. Results PRP supernatants were separated into four major fractions by gel chromatography. The protein recovery was 96.72%. Of the four fractions, fraction B contained the highest TGF-β1 and PDGF-AB levels, and the highest protein concentrations. Cell proliferation curves of MSCs demonstrated that fractions B and C induced a remarkable increase of MTT values compared to the untreated culture (P < 0.05). Fractions A and D showed no significant difference from the negative control group (P > 0.05). Conclusions The growth factors in PRP supernatants could be preliminarily separated into four fractions by gel chromatography, and the freeze-dried fractions retained the biological activity of the growth factors. The growth factors were mostly present in fractions B and C, and they promoted cell proliferation effectively.
2015-01-01
In the thermal infrared (TIR) waveband, solving for the target emissivity spectrum and temperature leads to an ill-posed problem in which the number of unknown parameters is larger than the number of available measurements. The approaches developed for solving this kind of problem are collectively called TES (temperature and emissivity separation) algorithms. As the name indicates, a TES algorithm is dedicated to separating the target temperature and emissivity in the calculation procedure. In this paper, a novel method called the new MaxEnt (maximum entropy) TES algorithm is proposed, which can be considered an extension of the MaxEnt TES algorithm proposed by Barducci. Maximum entropy estimation is the basic framework of both algorithms, so both can separate temperature and emissivity independently of empirical information derived from special databases. As a result, both algorithms can be applied to solve the temperature and emissivity spectrum of targets that are completely unknown to us. What makes the two algorithms different is that in the new MaxEnt TES algorithm the alpha spectrum derived by the ADE (alpha derived emissivity) method is added as a priori information. Based on the Wien approximation, the ADE method calculates an alpha spectrum which has a similar distribution to the true emissivity spectrum. Thanks to this extension, the new MaxEnt TES algorithm keeps a simpler mathematical formalism and provides faster computation for large volumes of data (i.e. hyperspectral images of the Earth). Some numerical simulations have been performed; the data and results show that the maximum RMSE of the emissivity estimation is 0.017 and the maximum absolute error of the temperature estimation is 0.62 K. Added with Gaussian white noise in which the signal to noise ratio is measured
Karim Ghani, Wan Azlina Wan Ab., E-mail: wanaz@eng.upm.edu.my; Rusli, Iffah Farizan, E-mail: iffahrusli@yahoo.com; Biak, Dayang Radiah Awang, E-mail: dayang@eng.upm.edu.my; Idris, Azni, E-mail: azni@eng.upm.edu.my [all: Department of Chemical and Environmental Engineering, Faculty of Engineering, University Putra Malaysia, 43400 Serdang, Selangor Darul Ehsan (Malaysia)]
2013-05-15
Highlights: ► The theory of planned behaviour (TPB) was used to identify the factors influencing participation in source separation of food waste, using self-administered questionnaires. ► The findings suggest several implications for the development and implementation of a waste separation at home programme. ► The analysis indicates that attitude towards waste separation is the main predictor, which in turn could be a significant predictor of the respondent's actual food waste separation behaviour. ► To date, no similar study has been reported elsewhere, and this finding will be beneficial to local authorities as an indicator in designing campaigns to promote waste separation programmes and reinforce positive attitudes. - Abstract: Tremendous increases in biodegradable (food) waste generation significantly impact the local authorities, who are responsible for managing, treating and disposing of this waste. Separation of food waste at its generation source is identified as an effective means of reducing the amount of food waste sent to landfill; the separated waste can be reused as feedstock to downstream treatment processes, namely composting or anaerobic digestion. However, these efforts will only succeed with positive attitudes and high participation rates among the public. Thus, a social survey (using questionnaires) to analyse the public's view of, and the factors influencing participation in, source separation of food waste in households, based on the theory of planned behaviour (TPB), was performed in June and July 2011 among selected staff at Universiti Putra Malaysia, Serdang, Selangor. The survey demonstrates that the public has positive intention to participate provided the opportunities, facilities and knowledge on waste separation at source are adequately prepared by the respective local authorities. Furthermore, good moral values and situational factors such as storage convenience and
Zheng, Bin
2013-07-16
Gate opening of zeolitic imidazolate frameworks (ZIFs) is an important microscopic phenomenon in explaining the adsorption, diffusion, and separation processes for large guest molecules. We present a force field, with input from density functional theory (DFT) calculations, for the molecular dynamics simulation on the gate opening in ZIF-8. The computed self-diffusivities for sorbed C1 to C3 hydrocarbons were in good agreement with the experimental values. The observed sharp diffusion separation from C2H6 to C3H8 was elucidated by investigating the conformations of the guest molecules integrated with the flexibility of the host framework. © 2013 American Chemical Society.
Clinical analysis of risk factors for premature separation of placenta
杨丽华; 谢穗
2013-01-01
Objective To analyze the risk factors for premature separation of placenta and to provide reference for its prevention and treatment. Methods Clinical data of 74 pregnant women with premature separation of placenta treated in our hospital were retrospectively analyzed, with 50 healthy pregnant women as controls. Logistic regression analysis was performed on the risk factors for premature separation of placenta. Results Logistic regression analysis showed that the main risk factors in the 74 cases included pregnancy-induced hypertension, mechanical injury, a long history of smoking, a history of cocaine abuse, premature rupture of membranes and thrombophilia. Conclusion The risk factors for premature separation of placenta are complex and diverse. Pregnant women should strengthen prenatal examinations and seek early diagnosis and treatment to improve the poor prognosis of newborns.
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Analysis of Factors Influencing Preschoolers' Separation Anxiety
黄睿
2015-01-01
Preschoolers' separation anxiety is the upset behavior and emotion arising from separation from their caregivers, and it differs according to attachment type and family care model. Based on attachment theory, this paper discusses the factors influencing preschoolers' separation anxiety from three perspectives: the caregivers, the children and the surrounding environment.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Karim Ghani, Wan Azlina Wan Ab; Rusli, Iffah Farizan; Biak, Dayang Radiah Awang; Idris, Azni
2013-05-01
Tremendous increases in biodegradable (food) waste generation significantly impact the local authorities, who are responsible for managing, treating and disposing of this waste. Separation of food waste at its generation source is identified as an effective means of reducing the amount of food waste sent to landfill; the separated waste can be reused as feedstock to downstream treatment processes, namely composting or anaerobic digestion. However, these efforts will only succeed with positive attitudes and high participation rates among the public. Thus, a social survey (using questionnaires) to analyse the public's view of, and the factors influencing participation in, source separation of food waste in households, based on the theory of planned behaviour (TPB), was performed in June and July 2011 among selected staff at Universiti Putra Malaysia, Serdang, Selangor. The survey demonstrates that the public has positive intention to participate provided the opportunities, facilities and knowledge on waste separation at source are adequately prepared by the respective local authorities. Furthermore, good moral values and situational factors such as storage convenience and collection times also encourage the public's involvement and, consequently, the participation rate. The findings from this study may provide a useful indicator to the waste management authorities in Malaysia in identifying mechanisms for the future development and implementation of food waste source separation activities in household programmes, and for communication campaigns which advocate the use of these programmes.
Borisevich, V. D.; Juromskiy, V. M.
2016-09-01
A method for designing adaptive automatic extremum-search systems to stabilize the power factor of a local electric power system of electric drives is considered. It consists in the application of serially connected capacitors that compensate the reactive component of the total electric power of parallel-connected centrifugal machines, usually called an aggregate. Operation of the system only requires measuring the voltage at the output of the static frequency converter for the electric drives. The proposed control system is designed to stabilize the power factor close to unity when the parameters of a separation cascade, or of a single separation device in an aggregate, change. Such a system can be operated continuously or connected occasionally, depending on the technological situation. In addition, it totally excludes the phenomenon of overcompensation.
Bodlaender, H.L.; Koster, A.M.C.A.
2003-01-01
A set of vertices S ⊆ V is called a safe separator for treewidth if S is a separator of G, and the treewidth of G equals the maximum, over all connected components W of G - S, of the treewidth of the graph obtained by making S a clique in the subgraph of G induced by W ∪ S. We show that such safe s
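The construction in this definition — split G on S and turn S into a clique in each induced part — can be written down directly. The helper below is our own illustration (computing the treewidth of the parts is left to an external routine, which this sketch does not attempt):

```python
def split_on_separator(adj, S):
    """Given a graph as a dict vertex -> set of neighbours and a
    vertex set S, return the graphs G[W ∪ S] with S made a clique,
    one per connected component W of G - S."""
    S = set(S)
    rest = set(adj) - S
    seen, parts = set(), []
    for v in rest:
        if v in seen:
            continue
        # DFS inside G - S to collect one component W
        comp, stack = {v}, [v]
        while stack:
            u = stack.pop()
            for w in adj[u] - S:
                if w in rest and w not in comp:
                    comp.add(w)
                    stack.append(w)
        seen |= comp
        # induced subgraph on W ∪ S, with S turned into a clique
        verts = comp | S
        sub = {u: (adj[u] & verts) - {u} for u in verts}
        for s in S:
            sub[s] |= S - {s}
        parts.append(sub)
    return parts
```

If S is safe, the treewidth of G is recovered as the maximum treewidth over the returned parts, which is what makes such separators useful for divide-and-conquer treewidth computation.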
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Study on Factors Influencing the Maximum Cargo Capacity of Domestic Coal Ships
王威
2015-01-01
The maximum cargo capacity of domestic coal ships varies considerably under the influence of navigation area, season, fuel and water reserves, ship constant, ballast water stock and other factors. This paper gives a method for calculating the maximum cargo capacity of domestic coal ships, analyses the factors affecting it and how they are determined, and offers guidance for practical work.
Decomposition of spectra using maximum autocorrelation factors
Larsen, Rasmus
2001-01-01
This paper addresses the problem of generating a low dimensional representation of the variation present in a set of spectra, e.g. reflection spectra recorded from a series of objects. The resulting low dimensional description may subsequently be input through variable selection schemes into classification or regression type analyses. A featured method for low dimensional representation of multivariate datasets is Hotelling's principal components transform. We will extend the use of principal components analysis by incorporating new information into the algorithm. This new information consists … Fourier decomposition, these new variables are located in frequency as well as wavelength. The proposed algorithm is tested on 100 samples of NIR spectra of wheat.
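As an illustration of the idea behind maximum autocorrelation factors, the textbook MAF transform can be phrased as a generalized eigenproblem between the covariance of the data and the covariance of its increments along the sampling axis. The numpy sketch below is our own rendering of that textbook form, not the exact algorithm of the paper:

```python
import numpy as np

def maf(X):
    """Maximum autocorrelation factors of an ordered multivariate
    series X of shape (n_samples, p), e.g. spectra sampled along
    wavelength.  Returns (scores, directions), ordered from highest
    to lowest autocorrelation."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)          # total covariance
    D = np.diff(Xc, axis=0)
    Sd = np.cov(D, rowvar=False)          # covariance of increments
    # whiten with respect to S, then diagonalise Sd in that space;
    # small eigenvalues of the whitened Sd mean smooth (highly
    # autocorrelated) directions, so eigh's ascending order is
    # already highest-autocorrelation first.
    evals, E = np.linalg.eigh(S)
    W = E / np.sqrt(evals)                # S-whitening matrix
    mu, V = np.linalg.eigh(W.T @ Sd @ W)
    A = W @ V
    return Xc @ A, A
```

Unlike principal components, which order directions by variance alone, MAF orders them by smoothness along the sampling axis, which is the "new information" exploited here.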
Shape Modelling Using Maximum Autocorrelation Factors
Larsen, Rasmus
2001-01-01
This paper addresses the problems of generating a low dimensional representation of the shape variation present in a training set after alignment using Procrustes analysis and projection into shape tangent space. We will extend the use of principal components analysis in the original formulation of Active Shape Models by Timothy Cootes and Christopher Taylor by building new information into the model. This new information consists of two types of prior knowledge. First, in many situations we will be given an ordering of the shapes of the training set. This situation occurs when the shapes of the training set are in reality a time series, e.g. snapshots of a beating heart during the cardiac cycle, or when the shapes are slices of a 3D structure, e.g. the spinal cord. Second, in almost all applications a natural order of the landmark points along the contour of the shape is introduced…
Briggs, J; Fergusson, J R; Shellard, E P S; Pennycook, S J
2015-01-01
We study the optimisation and porting of the "Modal" code on Intel(R) Xeon(R) processors and/or Intel(R) Xeon Phi(TM) coprocessors, using methods which should be applicable to more general compute-bound codes. "Modal" is used by the Planck satellite experiment for constraining general non-Gaussian models of the early universe via the bispectrum of the cosmic microwave background. We focus on the hot-spot of the code, which is the projection of bispectra from the end of inflation to a spherical shell at decoupling, which defines the CMB we observe. This calculation involves a three-dimensional inner product between two functions, one of which requires an integral, on a non-rectangular sparse domain. We show that by employing separable methods this calculation can be reduced to a one-dimensional summation plus two integrations, reducing the dimensionality from four to three. The introduction of separable functions also solves the issue of the domain, allowing efficient vectorisation and load balancing. This method becomes un...
Beatriz Alvarado
2015-10-01
The rupture of a conjugal relationship has both a positive and negative impact on the lives of immediate family members. Although for many women terminating marriage may signal freedom from an oppressive, even violent conjugal relationship, it is undeniable that this separation also results in strong social pressure and discrimination in certain contexts, a situation which limits the woman's freedom of action in and outside of the home. The purpose of this descriptive, phenomenological study is to explore the experiences of 15 Peruvian, urban-based mothers, all of whom made the decision to exchange marriage for single parenthood within the confines of a strong patriarchal system. The study follows the actions of the women as they seek to overcome obstacles related to parenting and the management of their respective households. Three emerging themes are identified in this study: (a) the development of the woman's relationship as wife and mother, (b) the impact of the separation/divorce on the maternal role, and (c) experiences in the single-parent household. Implications for social research studies and practice are discussed.
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
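For the Mean Energy Model mentioned above, the entropy-maximizing distribution under a mean-energy constraint has the familiar Gibbs form p_i ∝ exp(-β E_i). The sketch below is our own illustration (assuming a finite state space, moderate energies, and a target mean strictly between min(E) and max(E)); it finds the multiplier β by bisection, exploiting that the Gibbs mean energy is decreasing in β:

```python
import math

def maxent_mean_energy(E, target):
    """Maximum entropy distribution on finite states with energies E,
    constrained to mean energy `target`.  Returns (probabilities, beta)
    for the Gibbs form p_i ∝ exp(-beta * E_i)."""
    def mean(beta):
        w = [math.exp(-beta * e) for e in E]
        z = sum(w)
        return sum(wi * e for wi, e in zip(w, E)) / z
    lo, hi = -50.0, 50.0          # bracket; mean(lo) > target > mean(hi)
    for _ in range(200):          # mean(beta) is decreasing in beta
        mid = (lo + hi) / 2
        if mean(mid) > target:
            lo = mid
        else:
            hi = mid
    beta = (lo + hi) / 2
    w = [math.exp(-beta * e) for e in E]
    z = sum(w)
    return [wi / z for wi in w], beta
```

When the target mean equals the unconstrained mean of E, β comes out as zero and the uniform distribution is recovered, matching the principle's "least committal" character.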
Battaglia, Marco; Ogliari, Anna; D'Amato, Francesca; Kinkead, Richard
2014-10-01
Genetically informative studies showed that genetic and environmental risk factors act and interact to influence liability to (a) panic disorder, (b) its childhood precursor separation anxiety disorder, and (c) heightened sensitivity to CO2, an endophenotype common to both disorders. Childhood adversities including parental loss influence both panic disorder and CO2 hypersensitivity. However, childhood parental loss and separation anxiety disorder are weakly correlated in humans, suggesting the presence of alternative pathways of risk. The transferability of tests that assess CO2 sensitivity - an interspecific quantitative trait common to all mammals - to the animal laboratory setting allowed for environmentally controlled studies of early parental separation. Animal findings paralleled those of human studies, in that different forms of early maternal separation in mice and rats evoked heightened CO2 sensitivity; in mice, this could be explained by gene-by-environment interactional mechanisms. While several questions and issues (including obvious divergences between humans and rodents) remain open, parallel investigations by contemporary molecular genetic tools of (1) human longitudinal cohorts and (2) animals in controlled laboratory settings, can help elucidate the mechanisms beyond these phenomena.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
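The core idea — a Gaussian kernel on the residual downweights outlying labels, so they barely move the predictor — can be illustrated with a simple gradient-ascent sketch. The objective matches the regularized-MCC form described above, but the plain-gradient optimiser and all hyperparameters here are our own choices, not the paper's alternating algorithm:

```python
import numpy as np

def mcc_fit(X, y, sigma=1.0, lam=0.01, lr=0.1, iters=500):
    """Learn a linear predictor w by gradient ascent on the
    regularized correntropy objective
        J(w) = (1/n) * sum_i exp(-(y_i - x_i.w)^2 / (2 sigma^2))
               - lam * ||w||^2."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        r = y - X @ w                              # residuals
        k = np.exp(-r**2 / (2 * sigma**2))         # kernel weights
        # outliers get k ~ 0, so their gradient contribution vanishes
        grad = (X.T @ (k * r)) / (n * sigma**2) - 2 * lam * w
        w += lr * grad
    return w
```

Compared with squared loss, where a single corrupted label pulls the fit in proportion to its residual, the correntropy weight exp(-r²/2σ²) saturates and effectively ignores gross outliers.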
Pickett, C.A.; Gutierrez-Hartmann, A. [Univ. of Colorado Health Sciences Center, Denver, CO (United States)
1995-12-01
This report discusses the role of the epidermal growth factor (EGF) in promoting activation of the rat prolactin promoter in neuroendocrine cells via a Ras-independent mechanism. It also discusses the role of phosphotransferases in mediating EGF response. 32 refs., 8 figs., 1 tab.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate the intersymbol interference introduced by bandlimited channels. This detector, named the equalized near-maximum-likelihood detector, combines a nonlinear equalizer and a near-maximum-likelihood detector. Simulation results show that the performance of the equalized near-maximum-likelihood detector is better than that of the nonlinear equalizer alone but worse than that of the near-maximum-likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g. a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
李建中; 朱军; 张飞猛; 胡敬坤
2011-01-01
The main factors influencing the ground consistency (dispersion) at the maximum range of a type of truck-mounted artillery are analyzed, with emphasis on the influence of the forces acting on the elevating gear arc and of the ammunition parameters. Measures to improve the precision of the ground consistency test are suggested, providing a basis for improving the consistency test method for the truck-mounted artillery.
Krugman, Dorothy C.
1971-01-01
Discusses the role of the caseworker in providing support to children experiencing separation from their families and emphasizes the need to recognize that there are differences between those separation experiences dictated by the needs of children and those dictated by arbitrary or noncasework factors. (AJ)
Karan, Belgin; Pourbagher, Aysin; Torun, Nese
2016-06-01
To evaluate the correlations between the apparent diffusion coefficient (ADC) value and the standardized uptake value (SUV) with prognostic factors in breast cancer. Seventy women with invasive breast cancer (56 cases of invasive ductal carcinoma, four of mixed ductal and lobular invasive carcinoma, three of lobular invasive carcinoma, two of micropapillary carcinoma, and one each of mixed ductal and mucinous carcinoma, mucinous carcinoma, medullary carcinoma, metaplastic carcinoma, and tubular carcinoma) were included in this study. All patients underwent presurgical breast magnetic resonance imaging (MRI) with diffusion-weighted imaging (DWI) at 1.5T and whole-body (18) F-fluorodeoxyglucose ((18) F-FDG) positron emission tomography (PET) / computed tomography (CT). For all invasive breast cancers and invasive ductal carcinomas, we assessed the relationships among ADC, SUV, and pathological prognostic factors. Both the median ADC value and maximum SUV (SUVmax) were significantly associated with vascular invasion (P = 0.008 and P = 0.026, respectively). SUVmax was also significantly correlated with tumor size (P = 0.001), histological grade (P = 0.001), lymph node status (P = 0.0015), estrogen receptor status (P = 0.010), and human epidermal growth factor receptor 2 status (P = 0.020), whereas ADC values were not. The correlation between the ADC and SUVmax was not significant (P = 0.356; R = -0.112). Mucinous carcinoma showed high ADC and relatively low SUVmax. Medullary carcinoma showed low ADC and high SUVmax. When we evaluated the relationships among ADC, SUVmax, and prognostic factors in the 56 invasive ductal carcinomas, our statistical results were not significantly changed, except SUVmax was also significantly associated with progesterone receptor status (P = 0.034), but not lymph node status. SUVmax may be valuable for predicting the prognosis of breast cancer. Both ADC and SUVmax are useful to predict vascular invasion. J. Magn. Reson. Imaging 2016
Imam Santoso
2010-06-01
The aim of this research is to study the effect of carrier concentration, pH and extraction time on the separation factor of penicillin G and phenyl acetate by the reactive extraction technique. A 10 mL aqueous solution at pH 5 or 6, containing 0.001 M penicillin G and 0.001 M phenyl acetate, was extracted with 10 mL n-butyl acetate containing dioctylamine as carrier. The carrier concentrations tested were 0.000, 0.002, 0.004, 0.006 and 0.008 M, and the extraction times were 1, 5, 10, 15 and 20 min. The penicillin G and phenyl acetate dissolved in the organic phase were re-extracted with 10 mL of aqueous solution at pH 7 or 8. The optimum conditions obtained were as follows: dioctylamine concentration 0.002 M; pH of the first aqueous phase 5 and of the second aqueous phase 8; extraction time 10 min. Keywords: Separation factor, Reactive extraction
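The separation factor studied here is the ratio of the two solutes' distribution coefficients between the organic and aqueous phases. A minimal helper (our own illustration, assuming equal phase volumes as in the 10 mL / 10 mL experiment, so that the organic-phase concentration is the initial minus the remaining aqueous concentration) computes it from measured aqueous concentrations:

```python
def separation_factor(c0_a, caq_a, c0_b, caq_b):
    """Separation factor alpha = D_A / D_B after one extraction with
    equal phase volumes, where D = C_org / C_aq = (C0 - C_aq) / C_aq.
    alpha > 1 means solute A is extracted preferentially."""
    d_a = (c0_a - caq_a) / caq_a    # distribution coefficient of A
    d_b = (c0_b - caq_b) / caq_b    # distribution coefficient of B
    return d_a / d_b
```

For example, if 80% of the penicillin G but only 20% of the phenyl acetate leaves the aqueous phase, the separation factor is (4)/(0.25) = 16.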
Reynolds, John C.
2002-01-01
…expressions) for accessing and modifying shared structures, and for explicit allocation and deallocation of storage. Assertions are extended by introducing a "separating conjunction" that asserts that its sub-formulas hold for disjoint parts of the heap, and a closely related "separating implication". Coupled … dynamically allocated arrays, and recursive procedures. We will also discuss promising future directions.
Stein, Uri; Fox-Rabinovitz, Michael
1999-01-01
The factor separation (FS) technique has been utilized to evaluate quantitatively the impact of surface boundary forcings on the simulation of the 1988 summer drought over the Midwestern United States. The four surface boundary forcings used are: (1) sea surface temperature (SST), (2) soil moisture, (3) snow cover, and (4) sea ice. The Goddard Earth Observing System (GEOS) General Circulation Model (GCM) is used to simulate the 1988 U.S. drought. A series of sixteen simulations is performed with climatological and real 1988 surface boundary conditions. The major single and mutual synergistic factors/impacts are analyzed. The results show that SST and soil moisture are the major single pro-drought factors. The pairwise synergistic effect of SST and soil moisture is the major anti-drought factor. The triple synergistic impact of SST, soil moisture, and snow cover is the strongest pro-drought impact and is, therefore, the main contributor to the generation of the drought. The impact of the snow cover and sea ice anomalies for June 1988 on the drought is significant only when combined with the SST and soil moisture anomalies.
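The Stein-Alpert separation used here generalizes to any number of factors by inclusion-exclusion over the 2^n simulations (sixteen for the four forcings above): the pure contribution of factor i is f_i - f_0, the pairwise synergy is f_ij - f_i - f_j + f_0, and so on. A sketch of the general bookkeeping:

```python
from itertools import combinations

def factor_separation(f):
    """Stein-Alpert factor separation.  `f` maps each subset of
    switched-on factors (as a frozenset of factor names) to the
    simulated result; all 2^n subsets must be present.  Returns the
    pure/synergistic contribution of every subset via
        f_hat(S) = sum over T ⊆ S of (-1)^(|S|-|T|) f(T)."""
    factors = max(f, key=len)          # the all-factors-on subset
    out = {}
    for k in range(len(factors) + 1):
        for S in combinations(sorted(factors), k):
            S = frozenset(S)
            out[S] = sum((-1) ** (len(S) - len(T)) * f[frozenset(T)]
                         for j in range(len(S) + 1)
                         for T in combinations(sorted(S), j))
    return out
```

By construction the contributions sum back to the all-factors simulation, which is what lets the pure and synergistic terms be compared on a common footing.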
Learning Isometric Separation Maps
Vasiloglou, Nikolaos; Anderson, David V
2008-01-01
Maximum Variance Unfolding (MVU) and its variants have been very successful in embedding data-manifolds in lower dimensionality spaces, often revealing the true intrinsic dimensions. In this paper we show how to also incorporate supervised class information into an MVU-like method without breaking its convexity. We call this method the Isometric Separation Map and we show that the resulting kernel matrix can be used for a binary/multiclass Support Vector Machine in a semi-supervised (transductive) framework. We also show that the method always finds a kernel matrix that linearly separates the training data exactly without projecting them in infinite dimensional spaces.
[Separation anxiety in children].
Purper-Ouakil, Diane; Franc, Nathalie
2010-06-20
Separation anxiety disorder can be differentiated from developmental anxiety because of its intensity, persistence and negative impact on adaptive functioning. This disorder is closely linked to other anxiety and mood disorders and can also be associated with externalizing psychopathology in children and adolescents. Severe separation anxiety can result in school refusal and intra-familial violence. Cognitive behavioral therapies have the best evidence-based support for the treatment of separation anxiety disorder in children and adolescents. In addition, it is important to detect factors associated with persistence of anxiety such as systematic avoidance of separation and parental overprotection. The role of pediatricians and general practitioners in recognizing clinical separation anxiety and encouraging appropriate care and positive parental attitudes is essential, as separation anxiety is often associated with a variety of somatic symptoms.
Smith, Corey L; Matheson, Timothy D; Trombly, Daniel J; Sun, Xiaoming; Campeau, Eric; Han, Xuemei; Yates, John R; Kaufman, Paul D
2014-09-15
Chromatin assembly factor-1 (CAF-1) is a three-subunit protein complex conserved throughout eukaryotes that deposits histones during DNA synthesis. Here we present a novel role for the human p150 subunit in regulating nucleolar macromolecular interactions. Acute depletion of p150 causes redistribution of multiple nucleolar proteins and reduces nucleolar association with several repetitive element-containing loci. Of note, a point mutation in a SUMO-interacting motif (SIM) within p150 abolishes nucleolar associations, whereas PCNA or HP1 interaction sites within p150 are not required for these interactions. In addition, acute depletion of SUMO-2 or the SUMO E2 ligase Ubc9 reduces α-satellite DNA association with nucleoli. The nucleolar functions of p150 are separable from its interactions with the other subunits of the CAF-1 complex because an N-terminal fragment of p150 (p150N) that cannot interact with other CAF-1 subunits is sufficient for maintaining nucleolar chromosome and protein associations. Therefore these data define novel functions for a separable domain of the p150 protein, regulating protein and DNA interactions at the nucleolus.
Noise removal in multichannel image data by a parametric maximum noise fraction estimator
Conradsen, Knut; Ersbøll, Bjarne Kjær; Nielsen, Allan Aasbjerg
1991-01-01
Some approaches to noise removal in multispectral imagery are presented. The primary contribution of the present work is the establishment of several ways of estimating the noise covariance matrix from image data and a comparison of the noise separation performances. A case study with Landsat MSS … data demonstrates that the principal components are not sorted correctly in terms of visual image quality, whereas the minimum/maximum autocorrelation factors and the maximum noise fractions (MAFs) are. A case study with Landsat TM data shows an ordering which is consistent with the spatial wavelength … in the components. The case studies indicate that a better noise separation is attained when using more complex noise models than the simple model implied by MAF analysis. (L.M.)
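A schematic sketch of the maximum noise fraction idea on synthetic multichannel data: the noise covariance is estimated from neighbouring-pixel differences (one common estimator; the data, band count, and noise levels below are all made up), and a generalized eigenproblem sorts components by noise fraction rather than by variance:

```python
import numpy as np
from scipy.linalg import eigh

# Maximum noise fraction (MNF) transform on synthetic data. The signal
# is constant down the columns, so vertical shift differences isolate
# the noise; all parameters here are illustrative assumptions.

rng = np.random.default_rng(42)
rows, cols, bands = 60, 60, 4
x = np.linspace(0, 1, cols)
smooth = np.stack([np.sin(2 * np.pi * (k + 1) * x) for k in range(bands)], axis=-1)
cube = np.broadcast_to(smooth, (rows, cols, bands)).copy()
cube += rng.normal(scale=[0.05, 0.3, 0.1, 0.5], size=(rows, cols, bands))

X = cube.reshape(-1, bands)
sigma = np.cov(X, rowvar=False)                              # total covariance
d = (cube[1:, :, :] - cube[:-1, :, :]).reshape(-1, bands)    # vertical shift differences
sigma_n = 0.5 * np.cov(d, rowvar=False)                      # noise covariance estimate

# Generalized eigenproblem Sigma_N v = lambda Sigma v: the eigenvalues
# are noise fractions, so the last components are the noisiest.
noise_fractions, vecs = eigh(sigma_n, sigma)
mnf_components = X @ vecs
```

Unlike principal components, this ordering depends on the noise model through sigma_n, which is why the abstract's more careful covariance estimators change the result.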
[Separation anxiety. Theoretical considerations].
Blandin, N; Parquet, P J; Bailly, D
1994-01-01
The interest in separation anxiety is nowadays increasing: this disorder appearing during childhood may predispose to the occurrence of anxiety disorders (such as panic disorder and agoraphobia) and major depression in adulthood. Psychoanalytic theories differ on the nature of separation anxiety and its place in child development. For some authors, separation anxiety must be understood as resulting from the unconscious internal conflicts inherent in the individuation process and the gradual attainment of autonomy. From this point of view, the fear of loss of the mother through separation is not regarded as resulting from a real danger. Freud, however, considers the primary experience of separation from the protecting mother as the prototype situation of anxiety and compares situations generating fear to separation experiences. For him, anxiety originates from two factors: the physiological factor is initiated at the time of birth, but the primary traumatic situation is the separation from the mother. This point of view may be compared with behavioral theories, which suggest that separation anxiety may be conditioned or learned from innate fears. In Freud's theory, the primary situation of anxiety resulting from separation from the mother plays a role comparable to innate fears. Grappling with the problem of separation anxiety, Bowlby then emphasizes the importance of the child's attachment to one person (the mother or primary caregiver) and the fact that this attachment is instinctive. This point of view, based on the observation of infants, is akin to ethological theories on the behaviour of non-human primates. Bowlby shows in particular that the reactions of infants separated from their mother evolve in three stages: the phase of protest, which may constitute the prototype of adult anxiety; the phase of despair, which may be the prototype of depression; and the phase of detachment. He thus emphasizes the role of early separations in the development of vulnerability to depression
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
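A minimal sketch of the maximum entropy clustering step itself, in the deterministic-annealing style: memberships follow a Gibbs distribution at a "temperature" T. The kernel (feature-space) mapping of the paper is omitted here for brevity; plain Euclidean distances are an assumption, not the paper's code:

```python
import numpy as np

# Maximum entropy clustering: soft memberships p(k|x) ~ exp(-d^2/T),
# alternated with membership-weighted center updates.

def max_entropy_clustering(X, k, T=0.5, iters=50):
    idx = np.linspace(0, len(X) - 1, k).astype(int)   # spread-out initial centers
    centers = X[idx].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        p = np.exp(-d2 / T)                           # maximum-entropy memberships
        p /= p.sum(axis=1, keepdims=True)
        centers = (p.T @ X) / p.sum(axis=0)[:, None]  # weighted means
    return p.argmax(axis=1), centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.1, size=(30, 2)) for m in (0.0, 3.0)])
labels, centers = max_entropy_clustering(X, k=2)
```

Large T spreads memberships toward uniform (maximum entropy); as T decreases the assignment hardens toward k-means. KMEC's contribution is to evaluate the distances d2 in a Mercer kernel feature space instead.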
2016-01-01
Footage of the 70 degree ISOLDE GPS separator magnet MAG70 as well as the switchyard for the Central Mass and GLM (GPS Low Mass) and GHM (GPS High Mass) beamlines in the GPS separator zone. In the GPS20 vacuum sector equipment such as the long GPS scanner 482 / 483 unit, faraday cup FC 490, vacuum valves and wiregrid piston WG210 and WG475 and radiation monitors can also be seen. Also the RILIS laser guidance and trajectory can be seen, the GPS main beamgate switch box and the actual GLM, GHM and Central Beamline beamgates in the beamlines as well as the first electrostatic quadrupoles for the GPS lines. Close up of the GHM deflector plates motor and connections and the inspection glass at the GHM side of the switchyard.
2016-01-01
Footage of the 90 and 60 degree ISOLDE HRS separator magnets in the HRS separator zone. In the two vacuum sectors HRS20 and HRS30 equipment such as the HRS slits SL240, the HRS faraday cup FC300 and wiregrid WG210 can be spotted. Vacuum valves, turbo pumps, beamlines, quadrupoles, water and compressed air connections, DC and signal cabling can be seen throughout the video. The HRS main and user beamgate in the beamline between MAG90 and MAG60 and its switchboxes as well as all vacuum bellows and flanges are shown. Instrumentation such as the HRS scanner unit 482 / 483, the HRS WG470 wiregrid and slits piston can be seen. The different quadrupoles and supports are shown as well as the RILIS guidance tubes and installation at the magnets and the different radiation monitors.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Nylon separators [thermal degradation]
Lim, H. S.
1977-01-01
A nylon separator was placed in a flooded condition in KOH solution and heated at various temperatures ranging from 60 C to 110 C. The weight decrease was measured, and the molecular weight and decomposition products were analyzed to determine: (1) the effect of KOH concentration on the hydrolysis rate; (2) the effect of KOH concentration on nylon degradation; (3) the activation energy at different KOH concentrations; and (4) the effect of oxygen on nylon degradation. The nylon hydrolysis rate is shown to increase as KOH concentration is decreased from 34%, giving a maximum rate at about 16%. Separator hydrolysis is confirmed by the molecular weight decrease with battery age, and the reaction of nylon with molecular oxygen is probably negligible compared to hydrolysis. The extrapolated rate value from the high-temperature experiment correlates well with experimental values at 35 C.
Vesselinov, V. V.; Alexandrov, B.
2014-12-01
The identification of the physical sources causing spatial and temporal fluctuations of state variables such as river stage levels and aquifer hydraulic heads is challenging. The fluctuations can be caused by variations in natural and anthropogenic sources such as precipitation events, infiltration, groundwater pumping, barometric pressures, etc. The source identification and separation can be crucial for conceptualization of the hydrological conditions and characterization of system properties. If the original signals that cause the observed state-variable transients can be successfully "unmixed", decoupled physics models may then be applied to analyze the propagation of each signal independently. We propose a new model-free inverse analysis of transient data based on the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS) coupled with a k-means clustering algorithm, which we call NMFk. NMFk is capable of identifying a set of unique sources from a set of experimentally measured mixed signals, without any information about the sources, their transients, and the physical mechanisms and properties controlling the signal propagation through the system. A classical BSS conundrum is the so-called "cocktail-party" problem, where several microphones are recording the sounds in a ballroom (music, conversations, noise, etc.). Each of the microphones records a mixture of the sounds. The goal of BSS is to "unmix" and reconstruct the original sounds from the microphone records. Similarly to the "cocktail-party" problem, our model-free analysis only requires information about state-variable transients at a number of observation points, m, where m > r, and r is the number of unknown unique sources causing the observed fluctuations. We apply the analysis to a dataset from the Los Alamos National Laboratory (LANL) site. We identify the sources as barometric-pressure and water-supply pumping effects and estimate their impacts. We also estimate the
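The NMF unmixing at the core of the approach can be sketched on synthetic mixed signals. This uses the classic Lee-Seung multiplicative updates; NMFk's k-means-based selection of the number of sources r is omitted, and the sources and mixing matrix below are made up:

```python
import numpy as np

# Blind source separation by NMF: mixed records V (observation points
# x time) are factored as V ~ W H, the rows of H recovering the
# unknown non-negative sources.

def nmf(V, r, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 0.1
    H = rng.random((r, V.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # Lee-Seung updates keep
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # W, H non-negative
    return W, H

t = np.linspace(0, 1, 200)
sources = np.vstack([np.abs(np.sin(6 * np.pi * t)), np.exp(-3 * t)])  # r = 2 sources
mixing = np.array([[1.0, 0.5], [0.3, 1.0], [0.8, 0.2]])               # m = 3 observation points
V = mixing @ sources
W, H = nmf(V, r=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

NMF factorizations are unique only up to scaling and permutation, which is why NMFk reruns the factorization many times and clusters the resulting sources to find a robust, reproducible set.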
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
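A hedged sketch of the step the abstract builds on: maximizing the Boltzmann-Gibbs entropy of the dose-effect distribution under a mean-effect constraint yields an exponential survival law, while the Tsallis entropy with a cutoff yields a q-exponential vanishing beyond a finite dose. The symbols below are generic illustrations, not the paper's notation:

```latex
% Boltzmann-Gibbs: maximize  S = -\int p(E)\,\ln p(E)\,dE
% subject to  \int p\,dE = 1  and  \int E\,p\,dE = \langle E\rangle,
% giving  p(E) \propto e^{-\lambda E}  and an exponential survival fraction
F(D) = e^{-\alpha D}.
% Replacing S by the Tsallis entropy  S_q = \frac{1-\int p^q\,dE}{q-1}
% under the same constraints gives a q-exponential with a natural
% cutoff dose D_0 (a "tissue effect" threshold):
F(D) = \left(1 - \frac{D}{D_0}\right)^{\gamma}, \qquad 0 \le D < D_0,
\qquad F(D) = 0 \ \text{for}\ D \ge D_0 .
```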
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Bohman, Hannes; Låftman, Sara Brolin; Päären, Aivar; Jonsson, Ulf
2017-03-29
Earlier research has investigated the association between parental separation and long-term health outcomes among offspring, but few studies have assessed the potentially moderating role of mental health status in adolescence. The aim of this study was to analyze whether parental separation in childhood predicts depression in adulthood and whether the pattern differs between individuals with and without earlier depression. A community-based sample of individuals with adolescent depression in 1991-93 and matched non-depressed peers were followed up using a structured diagnostic interview after 15 years. The participation rate was 65% (depressed n = 227; non-depressed controls n = 155). Information on parental separation and conditions in childhood and adolescence was collected at baseline. The outcome was depression between the ages 19-31 years; information on depression was collected at the follow-up diagnostic interview. The statistical method used was binary logistic regression. Our analyses showed that depressed adolescents with separated parents had an excess risk of recurrence of depression in adulthood, compared with depressed adolescents with non-separated parents. In addition, among adolescents with depression, parental separation was associated with an increased risk of a switch to bipolar disorder in adulthood. Among the matched non-depressed peers, no associations between parental separation and adult depression or bipolar disorder were found. Parental separation may have long-lasting health consequences for vulnerable individuals who suffer from mental illness already in adolescence.
Jarret, Guillaume; Martinez, José; Dourmad, Jean-Yves
2011-11-01
In the IPCC guideline for the determination of methane (CH4) emission from animal manure, the amount of CH4 emitted is generally calculated according to an equation combining the amount of organic matter (OM) or volatile solids excreted, the ultimate CH4 potential (B0) of excreta, and a system-specific methane conversion factor (MCF, %) that reflects the portion of B0 that is really converted into CH4. The objective of the present study was to investigate the effect of the modification of dietary crude protein and fibre levels on B0 of pig slurry and on subsequent MCF according to different strategies of slurry management. Five experimental diets differing mainly in their crude protein and fibre content were compared. Two types of measurement of CH4 emission were performed. The first was the measurement of B0 of slurry using a biochemical methane potential (BMP) test. The second consisted of a storage simulation, which was performed on different kinds of effluents: fresh slurry (FSl), stored slurry (SSl), and faeces mixed with water (FaW). The type of diet and the type of effluent affected (P dietary treatments whereas it differed for storage simulation studies with significant effects of dietary CP and fibre contents. The results from this study indicate that the type of diet has a significant but rather limited effect on the B0 value of effluent. The effect of diet is much more marked on MCF, with lower values for high-protein diets and higher values for high-fibre diets. MCF is also affected by manure management, the values measured on faeces separated from urine being much higher than for slurry.
Yamani, Jamila S; Lounsbury, Amanda W; Zimmerman, Julie B
2016-01-01
The potential for a chitosan-copper polymer complex to select for the target contaminants in the presence of their respective competitive ions was evaluated by synthesizing chitosan-copper beads (CCB) for the treatment of (arsenate:phosphate), (selenite:phosphate), and (selenate:sulfate). Based on work by Rhazi et al., copper (II) binds to the amine moiety on the chitosan backbone as a monodentate complex (Type I) and as a bidentate complex crosslinking two polymer chains (Type II), depending on pH and copper loading. In general, the Type I complex exists alone; however, beyond threshold conditions of pH 5.5 during synthesis and a copper loading of 0.25 mol Cu(II)/mol chitosan monomer, the Type I and Type II complexes coexist. Subsequent chelation of this chitosan-copper ligand to oxyanions results in enhanced and selective adsorption of the target contaminants in complex matrices with high background ion concentrations. With differing affinities for arsenate, selenite, and phosphate, the Type I complex favors phosphate chelation while the Type II complex favors arsenate chelation due to electrostatic considerations and selenite chelation due to steric effects. No trend was exhibited for the selenate:sulfate system, possibly due to the high Ksp of the corresponding copper salts. Binary separation factors, α12, were calculated for the arsenate-phosphate and selenite-phosphate systems, supporting the mechanistic hypothesis. While further research is needed to develop a synthesis method for the independent formation of Type II complexes to select for target contaminants in complex matrices, this work provides initial steps in the development of a selective adsorbent.
Al-Hinai, Mohab A; Jones, Shawn W; Papoutsakis, Eleftherios T
2014-01-01
Sporulation in the model endospore-forming organism Bacillus subtilis proceeds via the sequential and stage-specific activation of the sporulation-specific sigma factors, σ(H) (early), σ(F), σ(E), σ(G), and σ(K) (late). Here we show that the Clostridium acetobutylicum σ(K) acts both early, prior to Spo0A expression, and late, past σ(G) activation, thus departing from the B. subtilis model. The C. acetobutylicum sigK deletion (ΔsigK) mutant was unable to sporulate, and solventogenesis, the characteristic stationary-phase phenomenon for this organism, was severely diminished. Transmission electron microscopy demonstrated that the ΔsigK mutant does not develop an asymmetric septum and produces no granulose. Complementation of sigK restored sporulation and solventogenesis to wild-type levels. Spo0A and σ(G) proteins were not detectable by Western analysis, while σ(F) protein levels were significantly reduced in the ΔsigK mutant. spo0A, sigF, sigE, sigG, spoIIE, and adhE1 transcript levels were all downregulated in the ΔsigK mutant, while those of the sigH transcript were unaffected during the exponential and transitional phases of culture. These data show that σ(K) is necessary for sporulation prior to spo0A expression. Plasmid-based expression of spo0A in the ΔsigK mutant from a nonnative promoter restored solventogenesis and the production of Spo0A, σ(F), σ(E), and σ(G), but not sporulation, which was blocked past the σ(G) stage of development, thus demonstrating that σ(K) is also necessary in late sporulation. sigK is expressed very early at low levels in exponential phase but is strongly upregulated during the middle to late stationary phase. This is the first sporulation-specific sigma factor shown to have two developmentally separated roles.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse, however, is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with the property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
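The extreme-value reasoning in the abstract can be sketched in its simplest form: Gutenberg-Richter magnitudes (b-value b) occurring as a Poisson process at rate lam above a completeness magnitude m0 give a closed-form CDF for the largest magnitude in a future window of t years. Every parameter value below is an illustrative assumption, not a hazard estimate:

```python
import math

# CDF of the maximum magnitude in t years under an untruncated
# Gutenberg-Richter law with Poisson occurrence:
#   P(M_max <= m) = exp(-lam * t * 10 ** (-b * (m - m0)))

def cdf_max_magnitude(m, lam=10.0, t=50.0, b=1.0, m0=4.0):
    return math.exp(-lam * t * 10.0 ** (-b * (m - m0)))

# Probability that a 50-year window produces an event above magnitude 7:
p_exceed = 1.0 - cdf_max_magnitude(7.0)   # 1 - exp(-0.5), about 0.39
```

A hard upper bound M enters by truncating the magnitude distribution at M; the abstract's point is that observations in realistic windows constrain this CDF far more weakly than they constrain b or lam.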
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
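The Kirchhoff index itself is easy to compute numerically via the Moore-Penrose pseudoinverse of the graph Laplacian, using the standard identity Kf(G) = n * trace(L+). A sketch, checked on the 4-cycle (itself a cactus with one cycle), for which the known closed form for cycles gives Kf(C_n) = (n^3 - n)/12:

```python
import numpy as np

# Kirchhoff index from the Laplacian pseudoinverse: since the
# resistance distance is r_ij = L+_ii + L+_jj - 2 L+_ij and the row
# sums of L+ vanish, summing over all pairs gives Kf = n * tr(L+).

def kirchhoff_index(adj):
    adj = np.asarray(adj, dtype=float)
    laplacian = np.diag(adj.sum(axis=1)) - adj
    return len(adj) * np.trace(np.linalg.pinv(laplacian))

c4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
kf = kirchhoff_index(c4)   # (4**3 - 4) / 12 = 5.0
```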
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus … on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders…
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Molenaar, P.C.M.; Nesselroade, J.R.
1998-01-01
The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data - a central product of intraindividual investigations - requires special modeling techniques. The dynamic factor model (DFM), w
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow, and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws, Self-Organized Criticality and Maximum Entropy Production, the power-law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Quantifying extrinsic noise in gene expression using the maximum entropy framework.
Dixit, Purushottam D
2013-06-18
We present a maximum entropy framework to separate intrinsic and extrinsic contributions to noisy gene expression solely from the profile of expression. We express the experimentally accessible probability distribution of the copy number of the gene product (mRNA or protein) by accounting for possible variations in extrinsic factors. The distribution of extrinsic factors is estimated using the maximum entropy principle. Our results show that extrinsic factors qualitatively and quantitatively affect the probability distribution of the gene product. We work out, in detail, the transcription of mRNA from a constitutively expressed promoter in Escherichia coli. We suggest that the variation in extrinsic factors may account for the observed wider-than-Poisson distribution of mRNA copy numbers. We successfully test our framework on a numerical simulation of a simple gene expression scheme that accounts for the variation in extrinsic factors. We also make falsifiable predictions, some of which are tested on previous experiments in E. coli whereas others need verification. Application of the presented framework to more complex situations is also discussed.
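The qualitative claim of the abstract, that extrinsic variation widens the copy-number distribution beyond Poisson, can be illustrated in its standard minimal form: a Poisson (intrinsic) count whose rate is itself drawn from a broad extrinsic distribution. A Gamma mixing distribution (giving a negative binomial) is used here as an illustrative assumption, not the paper's fitted maximum-entropy distribution:

```python
import numpy as np

# Poisson count with a Gamma-distributed rate: the marginal is
# negative binomial, i.e. wider than Poisson. The excess Fano factor
# equals var(rate)/mean(rate) = scale = 2, so Fano ~ 3 here.

rng = np.random.default_rng(7)
rates = rng.gamma(shape=5.0, scale=2.0, size=100_000)  # extrinsic factor variation
counts = rng.poisson(rates)                            # intrinsic Poisson noise

fano = counts.var() / counts.mean()   # Fano factor; 1 for pure Poisson
```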
Relationship between fault length and maximum displacement and its influencing factors
XU Shun-shan; A. F.NIETO-SAMANIEGO; LI Dong-xu
2004-01-01
The relationship between maximum fault displacement and fault trace length obeys the power-law equation D = cL^n, but the exponent n varies widely. To explore the magnitude of n and the underlying fault mechanics, 18 datasets spanning more than 8 orders of magnitude in fault length were collected from the published literature. The calculated values of n range from 0.55 to 1.65, with an average of 1.0839. Because the maximum length of strike-slip faults lies in their dip direction, they should not be pooled with dip-slip faults; excluding the one strike-slip dataset gives an average of 1.1066. The peak value of n obtained by double regression is 1.0-1.1. These results imply that the relationship between maximum displacement and fault trace length is very nearly linear. This linear relationship can be explained by Dugdale's model, which treats the deformation at the tips of tensile cracks in an elastic-plastic material as inelastic; the model applies to a single lithology deformed in a single tectonic event. The wide range of n values is probably influenced by deviations in the measured fault trace length, caused by factors including different observation planes, the resolution of fault tips, fault linkage, variations in rock mechanical properties, and multi-stage fault activity.
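The per-dataset regression behind the reported exponent values amounts to ordinary least squares in log-log space. A sketch on synthetic "faults" generated with n = 1.1, near the reported mean, over 8 orders of magnitude in length:

```python
import numpy as np

# Estimating the exponent n and prefactor c in D = c * L**n by least
# squares on log10(D) vs log10(L). Data here are synthetic with
# lognormal scatter, not measurements from the compiled datasets.

rng = np.random.default_rng(3)
L = 10.0 ** rng.uniform(0, 8, size=200)                     # fault trace lengths
D = 0.01 * L ** 1.1 * 10.0 ** rng.normal(0, 0.1, size=200)  # displacements with scatter

slope, intercept = np.polyfit(np.log10(L), np.log10(D), deg=1)
n_hat, c_hat = slope, 10.0 ** intercept   # recovered exponent and prefactor
```

The length biases listed in the abstract (censored fault tips, linkage, observation plane) act directly on log10(L) and therefore bias the fitted slope, which is one way the spread in n can arise.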
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
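The regularizer described above hinges on the mutual information between the discrete classification response and the true label; a minimal sketch of the plug-in estimate from a contingency table (an illustration only, not the paper's entropy estimator or gradient-descent optimizer):

```python
import numpy as np

def mutual_information(y_true, y_pred):
    """Plug-in estimate of I(Y; Yhat) in nats from empirical joint counts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    joint = np.zeros((y_true.max() + 1, y_pred.max() + 1))
    for t, p in zip(y_true, y_pred):
        joint[t, p] += 1.0 / len(y_true)
    px = joint.sum(axis=1, keepdims=True)   # marginal of true labels
    py = joint.sum(axis=0, keepdims=True)   # marginal of responses
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (px @ py)[mask])).sum())

y = [0, 0, 1, 1, 1, 0]
print(mutual_information(y, y))        # ~0.693 = ln 2: response fully informative
print(mutual_information(y, [0] * 6))  # ~0: a constant response carries no information
```

Maximizing this quantity over the classifier's responses is exactly the sense in which knowing the response "reduces the uncertainty" of the true label.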
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Bayesian Source Separation and Localization
Knuth, K H
1998-01-01
The problem of mixed signals occurs in many different contexts; one of the most familiar being acoustics. The forward problem in acoustics consists of finding the sound pressure levels at various detectors resulting from sound signals emanating from the active acoustic sources. The inverse problem consists of using the sound recorded by the detectors to separate the signals and recover the original source waveforms. In general, the inverse problem is unsolvable without additional information. This general problem is called source separation, and several techniques have been developed that utilize maximum entropy, minimum mutual information, and maximum likelihood. In previous work, it has been demonstrated that these techniques can be recast in a Bayesian framework. This paper demonstrates the power of the Bayesian approach, which provides a natural means for incorporating prior information into a source model. An algorithm is developed that utilizes information regarding both the statistics of the amplitudes...
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best algorithm known to date for this problem runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has recently been improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
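For the bipartite case mentioned above, the classical baseline the paper improves upon is augmenting-path matching; a minimal sketch of Kuhn's algorithm (a standard textbook method, not the paper's Glauber-dynamics algorithm):

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm; adj[u] lists the right-side
    neighbours of left vertex u. Returns the maximum matching size, O(V*E)."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its partner can be re-matched along another path
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# 3x3 bipartite graph with every edge except the diagonal: perfect matching exists.
print(max_bipartite_matching([[1, 2], [0, 2], [0, 1]], 3))  # 3
```

Hopcroft-Karp improves on this by augmenting along many shortest paths at once, which is where the $O(m \sqrt{n})$ bound comes from.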
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Particle separations by electrophoretic techniques
Ballou, N.E.; Petersen, S.L.; Ducatte, G.R.; Remcho, V.T.
1996-03-01
A new method for particle separations based on capillary electrophoresis has been developed and characterized. It uniquely separates particles according to their chemical nature. Separations have been demonstrated with chemically modified latex particles and with inorganic oxide and silicate particles. Separations have been shown both experimentally and theoretically to be essentially independent of particle size in the range of about 0.2 µm to 10 µm. The method has been applied to separations of UO₂ particles from environmental particulate material. For this, an integrated method was developed for capillary electrophoretic separation, collection of separated fractions, and determination of UO₂ and environmental particles in each fraction. Experimental runs with the integrated method on mixtures of UO₂ particles and environmental particulate material demonstrated enrichment factors of 20 for UO₂ particles with respect to environmental particles in the UO₂-containing fractions. This enrichment factor reduces the costs and time for processing particulate samples by the lexan process by a factor of about 20.
杨耀华; 李宗良
2014-01-01
The working principle of the model LX-460 disc centrifuge for separation of natural latex is briefly introduced, and the factors influencing separation efficiency, such as the regulator combination, separation time, solid content of the fresh latex and centrifuge type, are examined. The experimental results showed that there is an optimum regulator screw length at which separation efficiency is high; efficiency falls as the separation time is prolonged or as the solid content of the latex increases, and the centrifuge type also influences the separation result. To improve efficiency, it is recommended to establish standard processing parameters and separation times, control the quality of the fresh latex, keep the centrifuge well maintained, and strengthen personnel training.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since the DC coefficient is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is an upper bound on the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraint is based on some interesting properties of the Huffman code table, and is used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Separated exclusive kaon production cross sections up to Q2=2.1 GeV2 and the kaon form factor
Carmignotto, Marco; Horn, Tanja
2017-01-01
Electromagnetic form factors are a key observable in probing hadronic structure, providing us with important information about underlying physical quantities related to nonperturbative QCD. Light mesons composed of a valence quark-antiquark pair can be described by a single electric form factor and have been shown to be a great laboratory for these studies. Using electroproduction experiments, a successful program was developed at Jefferson Laboratory for probing the charged pion form factor in the regime of Q2 up to 2.45 GeV2. This provided a first glimpse at a possible transition from the nonperturbative to the perturbative regime, and also information on the structure of the pion. The kaon is the next lightest existing hadron, providing an interesting channel for assessing the strangeness degree of freedom with mesons. Although the kaon is relatively unexploited to date, there are promising results from experiments of the 6 GeV era of Jefferson Laboratory with potential for kaon form factor extractions. In this talk we will present the recent analysis of the t-channel kaon cross section and discuss the relative contribution of longitudinal and transverse photons to the cross section up to Q2 values of 2.1 GeV2 and prospects for form factor extractions. Supported in part by NSF grants PHY-1306227 and PHY-1306418 and by the JSA Graduate Fellowship.
Children's separation anxiety scale (CSAS): psychometric properties.
Méndez, Xavier; Espada, José P; Orgilés, Mireia; Llavona, Luis M; García-Fernández, José M
2014-01-01
This study describes the psychometric properties of the Children's Separation Anxiety Scale (CSAS), which assesses separation anxiety symptoms in childhood. Participants in Study 1 were 1,908 schoolchildren aged between 8 and 11. Exploratory factor analysis identified four factors: worry about separation, distress from separation, opposition to separation, and calm at separation, which explained 46.91% of the variance. In Study 2, 6,016 children aged 8-11 participated. The factor model in Study 1 was validated by confirmatory factor analysis. The internal consistency (α = 0.82) and temporal stability (r = 0.83) of the instrument were good. The convergent and discriminant validity were evaluated by means of correlations with other measures of separation anxiety, childhood anxiety, depression and anger. Sensitivity of the scale was 85% and its specificity, 95%. The results support the reliability and validity of the CSAS.
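The internal-consistency figure reported above (α = 0.82) is Cronbach's alpha; a minimal sketch of how it is computed, on simulated item scores rather than the CSAS data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: four items driven by one latent trait plus independent noise.
rng = np.random.default_rng(1)
trait = rng.normal(size=(500, 1))
scores = trait + 0.8 * rng.normal(size=(500, 4))
print(round(cronbach_alpha(scores), 2))
```

Alpha rises toward 1 as the items share more common-trait variance, which is why it is read as a reliability coefficient for a scale like the CSAS.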
Jensen, Jonas Buhrkal; Birkedal, Lars
2012-01-01
In standard separation logic, separation means physical separation. In this paper, we introduce \emph{fictional separation logic}, which includes more general forms of fictional separating conjunctions P * Q, where "*" does not require physical separation, but may also be used in situations where the memory resources described by P and Q...
Maximum Braking Energy Recovery of Electric Vehicles and Its Influencing Factors
王猛; 孙泽昌; 卓桂荣; 程鹏
2012-01-01
Based on an analysis of the principle and energy flow of regenerative braking, the relations among braking power, regenerative braking power and braking energy recovery efficiency are derived. The analysis shows that the motor, the battery and the hydraulic brake system are the main factors affecting braking energy recovery; the influence of the brake-pipe layout is examined in particular. For ideal and typical braking conditions, the braking energy recovery potential and recovery efficiency of front-axle electric drive vehicles are calculated, but the results are unsatisfactory. A comparative study shows that dual-axle electric drive vehicles achieve the best performance in braking energy recovery potential, braking efficiency and braking energy recovery efficiency.
2014-01-01
Chromatin assembly factor-1 (CAF-1) is a three-subunit protein complex conserved throughout eukaryotes that deposits histones during DNA synthesis. Here we present a novel role for the human p150 subunit in regulating nucleolar macromolecular interactions. Acute depletion of p150 causes redistribution of multiple nucleolar proteins and reduces nucleolar association with several repetitive element–containing loci. Of note, a point mutation in a SUMO-interacting motif (SIM) within p150 abolishe...
Separation Anxiety (For Parents)
... both of you get through it. About Separation Anxiety: Babies adapt pretty well to other caregivers. Parents ...
曲英
2011-01-01
Identifying the factors that influence residents' behavior for source separation (BSS) of household waste is the prerequisite and basis for effectively turning waste into a resource. Based on the Theory of Planned Behavior, this paper analyzes those factors with an empirical study. The results show that behavioral intention comprises two dimensions and the influencing factors seven. The relationships between those factors and BOI and BEI, and between BOI, BEI and BSS, are also discussed. Borrowing the analytic hierarchy process (AHP), the paper also quantifies the impact of each factor on source-separation behavior. The conclusions can guide municipal authorities in policy making and in promoting source separation by residents.
Zhiyue Li; Qun Zhao; Ran Bi; Yong Zhuang; Siyin Feng
2011-01-01
Previous studies of nerve conduits have investigated numerous properties, such as conduit luminal structure and neurotrophic factor incorporation, for the regeneration of nerve defects. The present study used a poly(lactic-co-glycolic acid) (PLGA) copolymer to construct a three-dimensional (3D) bionic nerve conduit, with two channels and multiple microtubule lumens, and incorporating two neurotrophic factors, each with their own delivery system, as a novel environment for peripheral nerve regeneration. The efficacy of this conduit in repairing a 1.5 cm sciatic nerve defect was compared with PLGA-alone and PLGA-microfilament conduits, and autologous nerve transplantation. Results showed that compared with the other groups, the 3D bionic nerve conduit had the fastest nerve conduction velocity, largest electromyogram amplitude, and shortest electromyogram latency. In addition, the nerve fiber density, myelin sheath thickness and axon diameter were significantly increased, and the recovery rate of the triceps surae muscle wet weight was lowest. These findings suggest that 3D bionic nerve conduits can provide a suitable microenvironment for peripheral nerve regeneration to efficiently repair sciatic nerve defects.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented as a way to estimate receiver functions, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-predicting filter, from which the receiver function is estimated. During extrapolation the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable, and the maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
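The Toeplitz equation and Levinson algorithm mentioned above can be sketched with a generic Levinson-Durbin recursion (an illustrative implementation, not the authors' code); the reflection coefficients it produces, with magnitude below 1, are exactly the stability property the abstract appeals to:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for a linear prediction filter.
    r: autocorrelation sequence r[0..order].
    Returns (filter coefficients a with a[0] = 1, reflection coeffs, error)."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = float(r[0])
    refl = np.zeros(order)
    for i in range(1, order + 1):
        # Innovation: mismatch between r[i] and the current filter's prediction.
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        refl[i - 1] = k
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= 1.0 - k * k  # |k| < 1 keeps err > 0, hence a stable filter
    return a, refl, err

# AR(1) check: autocorrelation 0.5**|lag| gives a = [1, -0.5, 0], err = 0.75.
a, refl, err = levinson_durbin(np.array([1.0, 0.5, 0.25]), 2)
print(a, refl, err)
```

Each step costs O(i) work, so the whole recursion is O(order²) instead of the O(order³) of a general linear solve, which is why Levinson recursion is the standard route to the error-predicting filter.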
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources, with solar panels used to collect solar radiation and convert it into electricity. One technique for maximizing the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power; these quantities are found by differentiating the expression for power and locating its maximum. Once the maximum is found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as functions of the time of day.
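The maximization step can be illustrated with a simple single-diode I-V model (the model and its parameters are illustrative assumptions, not the project's data); where the paper differentiates P(V) analytically, here dP/dV = 0 is located numerically:

```python
import numpy as np

# Illustrative single-diode cell model: I(V) = I_L - I_0 * (exp(V / V_T) - 1)
I_L, I_0, V_T = 3.0, 1e-9, 0.05  # photocurrent (A), saturation current (A), thermal voltage (V)

def power(V):
    """Electrical power delivered at terminal voltage V."""
    return V * (I_L - I_0 * (np.exp(V / V_T) - 1.0))

# Locate the maximum-power point on a fine voltage grid.
V = np.linspace(0.0, 1.2, 200001)
P = power(V)
i = P.argmax()
print(f"V_mp = {V[i]:.3f} V, I_mp = {power(V[i]) / V[i]:.3f} A, P_max = {P[i]:.3f} W")
```

Repeating this for the irradiance at each time of day yields the curves of voltage, current and power at maximum power that the abstract describes.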
Controlling Separation in Turbomachines
Evans, Simon; Himmel, Christoph; Power, Bronwyn; Wakelam, Christian; Xu, Liping; Hynes, Tom; Hodson, Howard
2010-01-01
Four examples of flow control: 1) Passive control of LP turbine blades (Laminar separation control). 2) Aspiration of a conventional axial compressor blade (Turbulent separation control). 3) Compressor blade designed for aspiration (Turbulent separation control). 4.Control of intakes in crosswinds (Turbulent separation control).
Basic separative power of multi-component isotopes separation in a gas centrifuge
Jiang, Hongmin; Lei, Zengguang; Zhuge, Fu [Institute of Physical and Chemical Engineering, Tianjin (China)
2008-07-01
Assuming that an overall separation factor per unit exists in a centrifuge for multi-component isotope separation, the relations between the separative power of each component and its molecular weight are investigated, adopting the value function and separative power of binary-component separation. The separative power of each component is proportional to the square of the difference between its molecular weight and the average molecular weight of the remaining components; moreover, these relations are independent of the number of components and of the feed concentrations. The basic separative power and related expressions suggested in the paper can be used to estimate the separative power of each component and to analyze the separation characteristics. The most valuable application of the basic separative power is evaluating the separative capacity of a centrifuge for multi-component isotopes. (author)
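The stated proportionality, each component's separative power scaling with the squared difference between its mass and the mean mass of the other components, can be illustrated numerically (the mass numbers below are for illustration only):

```python
import numpy as np

def component_separative_power(masses):
    """Relative separative power of each component, proportional to the squared
    difference between its mass and the mean mass of the other components."""
    masses = np.asarray(masses, dtype=float)
    out = np.empty_like(masses)
    for i in range(len(masses)):
        others = np.delete(masses, i)
        out[i] = (masses[i] - others.mean()) ** 2
    return out

# Illustrative mass numbers: the lightest and heaviest components separate best.
print(component_separative_power([124, 129, 131, 136]))
```

As the abstract notes, the relative values depend only on the molecular weights, not on the number of components or the feed concentrations.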
Kernel maximum autocorrelation factor and minimum noise fraction transformations
Nielsen, Allan Aasbjerg
2010-01-01
) dimensional feature space via the kernel function and then performing a linear analysis in that space. Three examples show the very successful application of kernel MAF/MNF analysis to 1) change detection in DLR 3K camera data recorded 0.7 seconds apart over a busy motorway, 2) change detection...
A relationship between maximum packing of particles and particle size
Fedors, R. F.
1979-01-01
Experimental data indicate that the volume fraction of particles in a packed bed (i.e. maximum packing) depends on particle size. One explanation for this is based on the idea that particle adhesion is the primary factor. In this paper, however, it is shown that entrainment and immobilization of liquid by the particles can also account for the facts.
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem, which can be described as follows: change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem, and an efficient algorithm that uses two maximum dynamic flow computations is proposed to solve it.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tape. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage, resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element, defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. For a quenching duration of 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the total length of CC needed in the design of an SFCL can be determined.
A tropospheric ozone maximum over the equatorial Southern Indian Ocean
L. Zhang
2012-05-01
We examine the distribution of tropical tropospheric ozone (O_3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O_3 during 2005 to 2009 reveal a distinct, persistent O_3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years, a feature also consistent with total column O_3 observations from the Ozone Monitoring Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O_3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamical factors. The maximum is dominated by O_3 production driven by lightning nitrogen oxide (NO_x) emissions, which account for 62% of the tropospheric column O_3 in May 2006; the contributions from biomass burning, soil, anthropogenic and biogenic sources are rather small. O_3 production in the lightning outflow from Central Africa and South America peaks in May and is directly responsible for the O_3 maximum over the western ESIO, while lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O_3 maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008, which effectively entrain the lightning outflow from Central Africa and South America and transport it northward to the ESIO.
Jan Werner; Eva Maria Griebeler
2014-01-01
We tested whether growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which...
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Blasco, Francisco Lazaro
2011-01-01
A novel fountain coding scheme has been introduced. The scheme consists of a parallel concatenation of an MDS block code with an LRFC code, both constructed over the same field, $F_q$. The performance of the concatenated fountain coding scheme has been analyzed through the derivation of tight bounds on the probability of decoding failure as a function of the overhead. It has been shown that the concatenated scheme performs as well as LRFC codes in channels characterized by high erasure probabilities, while providing failure probabilities lower by several orders of magnitude at moderate/low erasure probabilities.
Separation anxiety in children
//medlineplus.gov/ency/article/001542.htm Separation anxiety in children is a developmental stage in which ...
Ionene membrane battery separator
Moacanin, J.; Tom, H. Y.
1969-01-01
Ionic transport characteristics of ionenes, insoluble membranes from soluble polyelectrolyte compositions, are studied for possible application in a battery separator. Effectiveness of the thin film of separator membrane essentially determines battery lifetime.
Nath, Pulak; Twary, Scott N.
2016-04-26
Described herein are methods and systems for harvesting, collecting, separating and/or dewatering algae using iron based salts combined with a magnetic field gradient to separate algae from an aqueous solution.
The Study of Factors and Measures of Children's Separation Anxiety
史丹丹; 滕春燕
2011-01-01
Children enter a state of separation anxiety when they first start kindergarten. If it is serious, it reduces the effectiveness of children's intellectual activities and can affect their later creativity and social adaptability. We analyze three factors behind separation anxiety and, based on the characteristics of children's physical and mental development and cases of separation anxiety encountered in the authors' own practice, discuss the measures that kindergartens and parents can take to relieve it.
Saarinen, Juha J.; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Evans, Alistair R.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Sibly, Richard M.; Stephens, Patrick R.; Theodor, Jessica; Uhen, Mark D.; Smith, Felisa A.
2014-01-01
There is accumulating evidence that macroevolutionary patterns of mammal evolution during the Cenozoic follow similar trajectories on different continents. This would suggest that such patterns are strongly determined by global abiotic factors, such as climate, or by basic eco-evolutionary processes such as the filling of niches by specialization. The similarity of pattern would be expected to extend to the history of individual clades. Here, we investigate the temporal distribution of the maximum size observed within individual orders globally and on separate continents. While the maximum sizes of individual orders of large land mammals differ and are reached by several families, the times at which orders reach their maximum size show strong congruence, peaking in the Middle Eocene, the Oligocene and the Plio-Pleistocene. The Eocene peak occurs when global temperature and land mammal diversity are high and is best explained as a result of niche expansion rather than abiotic forcing. Since the Eocene, there is a significant correlation between maximum size frequency and the global temperature proxy. The Oligocene peak is not statistically significant and may in part be due to sampling issues. The peak in the Plio-Pleistocene occurs when global temperature and land mammal diversity are low; it is statistically the most robust one, and it is best explained by global cooling. We conclude that the macroevolutionary patterns observed are a result of the interplay between eco-evolutionary processes and abiotic forcing. PMID:24741007
Saarinen, Juha J; Boyer, Alison G; Brown, James H; Costa, Daniel P; Ernest, S K Morgan; Evans, Alistair R; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; Okie, Jordan G; Sibly, Richard M; Stephens, Patrick R; Theodor, Jessica; Uhen, Mark D; Smith, Felisa A
2014-06-07
There is accumulating evidence that macroevolutionary patterns of mammal evolution during the Cenozoic follow similar trajectories on different continents. This would suggest that such patterns are strongly determined by global abiotic factors, such as climate, or by basic eco-evolutionary processes such as the filling of niches by specialization. The similarity of pattern would be expected to extend to the history of individual clades. Here, we investigate the temporal distribution of the maximum size observed within individual orders globally and on separate continents. While the maximum sizes of individual orders of large land mammals differ and are reached by several families, the times at which orders reach their maximum size show strong congruence, peaking in the Middle Eocene, the Oligocene and the Plio-Pleistocene. The Eocene peak occurs when global temperature and land mammal diversity are high and is best explained as a result of niche expansion rather than abiotic forcing. Since the Eocene, there is a significant correlation between maximum size frequency and the global temperature proxy. The Oligocene peak is not statistically significant and may in part be due to sampling issues. The peak in the Plio-Pleistocene occurs when global temperature and land mammal diversity are low; it is statistically the most robust one, and it is best explained by global cooling. We conclude that the macroevolutionary patterns observed are a result of the interplay between eco-evolutionary processes and abiotic forcing.
Separation and confirmation of showers
Neslušan, L.; Hajduková, M.
2017-01-01
Aims: Using the IAU MDC photographic, IAU MDC CAMS video, SonotaCo video, and EDMOND video databases, we aim to separate all provable annual meteor showers from each of these databases. We intend to reveal the problems inherent in this procedure and to answer the question of whether the databases are complete and the methods of separation used are reliable. We aim to evaluate the statistical significance of each separated shower. In this respect, we intend to give a list of reliably separated showers rather than a list of the maximum possible number of showers. Methods: To separate the showers, we simultaneously used two methods. The use of two methods enables us to compare their results, which can indicate the reliability of the methods. To evaluate the statistical significance, we suggest a new method based on the ideas of the break-point method. Results: We give a compilation of the showers from all four databases using both methods. Using the first (second) method, we separated 107 (133) showers that are in at least one of the databases used. These relatively low numbers are a consequence of discarding any candidate shower with poor statistical significance. Most of the separated showers were identified with meteor showers from the IAU MDC list of all showers. Many of them were identified with several showers in the list, which proves that many showers have been named multiple times under different names. Conclusions: At present, a prevailing share of existing annual showers can be found in the data and confirmed when we use a combination of results from large databases. However, to gain a complete list of showers, we need more-complete meteor databases than even the most extensive current ones. We also still need a more sophisticated method to separate showers and evaluate their statistical significance. Tables A.1 and A.2 are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc
Van Kooy, L.; Mooij, M.; Rem, P.
2004-01-01
Separations by density, such as the separation of non-ferrous scrap into light and heavy alloys, are often realized by means of heavy media. In principle, kinetic gravity separations in water can be faster and cheaper, because they do not rely on suspensions or salt solutions of which the density
Hierarchical Maximum Margin Learning for Multi-Class Classification
Yang, Jian-Bo
2012-01-01
With myriad classes, designing accurate and efficient classifiers becomes very challenging for multi-class classification. Recent research has shown that class-structure learning can greatly facilitate multi-class learning. In this paper, we propose a novel method to learn the class structure for multi-class classification problems. The class structure is assumed to be a binary hierarchical tree. To learn such a tree, we propose a maximum separating margin method to determine the child nodes of any internal node. The proposed method ensures that the two class groups represented by any two sibling nodes are maximally separable. In the experiments, we evaluate the accuracy and efficiency of the proposed method against other multi-class classification methods on real-world large-scale problems. The results show that the proposed method outperforms benchmark methods in terms of accuracy for most datasets and performs comparably with other class-structure learning methods in terms of efficiency for all datasets.
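The abstract does not spell out the optimization used to split an internal node into two class groups. As an illustrative proxy (not the paper's method), a 2-means clustering of class centroids sends far-apart classes into different child groups; the function name and farthest-point initialisation are our choices:

```python
import numpy as np

def split_classes(X, y, n_iter=20):
    """Group class labels into two child groups via 2-means on class
    centroids -- a simple proxy for a maximum-separating-margin split:
    classes whose centroids lie far apart land in different groups."""
    labels = np.unique(y)
    cents = np.array([X[y == c].mean(axis=0) for c in labels])
    # Farthest-point initialisation: seed with centroid 0 and the
    # centroid farthest from it, so the two seeds straddle the split.
    d0 = np.linalg.norm(cents - cents[0], axis=1)
    mu = np.stack([cents[0], cents[d0.argmax()]])
    for _ in range(n_iter):
        g = np.linalg.norm(cents[:, None] - mu[None], axis=2).argmin(axis=1)
        for k in (0, 1):
            if (g == k).any():
                mu[k] = cents[g == k].mean(axis=0)
    return dict(zip(labels.tolist(), g.tolist()))
```

Applied recursively, such a split yields a binary hierarchical tree over the classes, in the spirit of the structure the paper learns.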
金建君; FRANCISCO Jamil Paolo; SPOANN Vin; BUDDHAWONGSA Piyaluk; 马骅
2016-01-01
The purpose of this study is to analyze the effect of contextual factors and maximum price information on willingness to pay in Becker-DeGroot-Marschak (BDM) experimental auctions. To measure rural residents' willingness to pay for improved drinking-water quality, a portable water filter was chosen as the auctioned good. 354 randomly selected respondents from three Asian countries (China, Cambodia and the Philippines) participated in the experiment; half of the sample came from households with piped water and the other half from households without. The sample was then randomly assigned to two groups: one group was told the maximum price of the auctioned good, while the other group was not given this information. The results show that respondents given the maximum price information stated a higher average willingness to pay than those who were not, but the difference between the two groups is not significant. Contextual factors (whether the household has a piped water supply) and other socio-economic factors (income, age, education level, etc.) significantly affect willingness to pay. Households without a piped water supply have a higher willingness to pay for improved drinking-water quality than those with one; households with higher income and education levels show greater demand for improved drinking-water quality; and rural residents in the Philippines and Cambodia show markedly higher demand for the portable water filter than rural residents in China.
Foam separation of chromium (Ⅵ) from aqueous solution
JIAO Cai-shan; DING Yan
2009-01-01
Removal of chromium(VI) dissolved in water by intermittent foam separation was implemented with cetyltrimethylammonium bromide as the surfactant. The influence of various factors on removal efficiency was systematically studied. The removal efficiency has a maximum near pH 4.0; thus, most experiments were carried out at pH 4.0. An orthogonal experiment was conducted to confirm the optimal operating parameters. The orthogonal experimental results show that when the feed concentration is 10 mg/L, the pH of the feed solution is 4.00, the air flow rate is 0.9 L/min and the surfactant dosage is 300 mg/L, the maximum removal efficiency of chromium(VI) reaches 97.80% and the enrichment ratio reaches 1711. The kinetic test indicates that the foam separation of chromium is a first-order process. The equivalent rate constant calculated from the slope is 0.4064, and the equivalent rate equation is obtained.
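The first-order kinetics reported above can be made concrete. In the sketch below the rate constant 0.4064 is the paper's fitted value, but the time unit (assumed to be minutes here) is not stated in the abstract:

```python
import math

# Equivalent first-order rate constant from the abstract; the time
# unit is an assumption (reciprocal minutes).
K = 0.4064

def removal(t, k=K):
    """Fraction of Cr(VI) removed after time t under first-order
    kinetics: c(t) = c0 * exp(-k t), so removal = 1 - exp(-k t)."""
    return 1.0 - math.exp(-k * t)

def time_for_removal(frac, k=K):
    """Invert the first-order law: t = -ln(1 - frac) / k."""
    return -math.log(1.0 - frac) / k
```

Under this assumption, the reported 97.80% maximum removal would correspond to roughly 9.4 time units of foaming.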
Impact Factor Analysis on the Separation of AIDS Knowledge and Behavior in MSM
王毅; 李六林; 张光贵; 樊静; 赵西和; 贾蜀光; 周力; 龙星
2014-01-01
Objective: To understand the separation of AIDS knowledge and behavior among men who have sex with men (MSM) and analyze its impact factors. Method: From September 2011 to March 2012, respondents were recruited by snowball sampling in Mianyang city, Sichuan Province. Professionals administered a self-designed questionnaire in an anonymous face-to-face behavioral survey of 405 respondents, together with serological sampling. Results: Among the 405 respondents, the awareness rate of AIDS knowledge was 98.0% (397/405), the rate of condom use in every anal sex act in the previous 6 months was 56.6% (201/355), and the rate of separation of knowledge and behavior was 43.7% (152/348). The HIV-positive rate was 5.1% (10/196) among those without separation of knowledge and behavior, but reached 7.9% (12/152) among those with separation (χ2 = 1.127, P > 0.05). Multi-factor analysis identified the following independent impact factors for separation of knowledge and behavior: occupation as teacher/cadre/other (OR = 0.328, 95% CI: 0.143-0.755), knowledge from radio (OR = 6.062, 95% CI: 1.656-22.184), knowledge from the internet (OR = 2.747, 95% CI: 1.339-5.637), perceived risk of HIV infection (OR = 4.184, 95% CI: 1.354-12.923), versatile sex role (OR = 2.161, 95% CI: 0.040-4.489), no habit of using condoms (OR = 2.750, 95% CI: 1.476-5.122), ≥2 anal sex acts in the previous week (OR = 2.631, 95% CI: 1.115-6.209), and promiscuity while having a boyfriend (OR = 0.430, 95% CI: 1.149-4.717). Conclusion: Separation of knowledge and behavior exists among MSM and is affected by demographic characteristics, sex roles, ways of acquiring knowledge, sexual behavior, etc. AIDS prevention and control requires continuous strengthening of behavioral interventions targeting the factors behind the separation of knowledge and behavior.
Trautmann, N
1976-01-01
A survey is given on the progress of fast chemical separation procedures during the last few years. Fast, discontinuous separation techniques are illustrated by a procedure for niobium. The use of such techniques for the chemical characterization of the heaviest known elements is described. Other rapid separation methods from aqueous solutions are summarized. The application of the high speed liquid chromatography to the separation of chemically similar elements is outlined. The use of the gas jet recoil transport method for nuclear reaction products and its combination with a continuous solvent extraction technique and with a thermochromatographic separation is presented. Different separation methods in the gas phase are briefly discussed and the attachment of a thermochromatographic technique to an on-line mass separator is shown. (45 refs).
Acoustofluidic bacteria separation
Li, Sixing; Ma, Fen; Bachman, Hunter; Cameron, Craig E.; Zeng, Xiangqun; Huang, Tony Jun
2017-01-01
Bacterial separation from human blood samples can help with the identification of pathogenic bacteria for sepsis diagnosis. In this work, we report an acoustofluidic device for label-free bacterial separation from human blood samples. In particular, we exploit the acoustic radiation force generated from a tilted-angle standing surface acoustic wave (taSSAW) field to separate Escherichia coli from human blood cells based on their size difference. Flow cytometry analysis of the E. coli separated from red blood cells shows a purity of more than 96%. Moreover, the label-free electrochemical detection of the separated E. coli displays reduced non-specific signals due to the removal of blood cells. Our acoustofluidic bacterial separation platform has advantages such as label-free separation, high biocompatibility, flexibility, low cost, miniaturization, automation, and ease of in-line integration. The platform can be incorporated with an on-chip sensor to realize a point-of-care sepsis diagnostic device.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block-fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI) while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed-form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
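The equal-power MISO strategy described above can be checked numerically. The Monte-Carlo sketch below is ours, not the paper's derivation; it estimates the ergodic rate E[log2(1 + (SNR/Nt)·||h||²)] for i.i.d. Rayleigh fading:

```python
import numpy as np

def miso_equal_power_rate(n_tx, snr_db, n_trials=200_000, seed=1):
    """Monte-Carlo ergodic rate (bits/s/Hz) of an uncorrelated Rayleigh
    MISO channel with no CSI at the transmitter: equal power split and
    uncorrelated signals give R = E[log2(1 + (SNR/Nt) * ||h||^2)]."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    # |h_i|^2 of an i.i.d. CN(0,1) gain is Exp(1); sum over Nt antennas.
    gain = rng.exponential(1.0, size=(n_trials, n_tx)).sum(axis=1)
    return float(np.log2(1.0 + (snr / n_tx) * gain).mean())
```

Because averaging over more antennas hardens the channel around the same mean SNR, the estimated rate grows with the number of transmit antennas even without transmit CSI.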
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation fields, as well as to the determination of several parameters of interest in quantum optics.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally incorporates the requirement of maximum entropy, the characteristics of the system, and the connection conditions, which makes it applicable to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium states, nonequilibrium states, and states far from thermodynamic equilibrium.
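A standard textbook illustration of the technique (not an example from the paper) is Jaynes' die: among all distributions on {1..6} with a prescribed mean, the entropy maximizer is exponential-family, p_i ∝ exp(λ·i), and λ can be found by bisection because the mean is monotone in λ:

```python
import math

def maxent_die(mean, n=6, tol=1e-12):
    """Maximum-entropy distribution on {1..n} subject to a fixed mean.
    The constrained maximizer is p_i ∝ exp(lam * i); lam is found by
    bisection, since the implied mean increases monotonically with lam."""
    def mean_of(lam):
        w = [math.exp(lam * i) for i in range(1, n + 1)]
        return sum(i * wi for i, wi in enumerate(w, start=1)) / sum(w)
    lo, hi = -5.0, 5.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mean_of(mid) < mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * i) for i in range(1, n + 1)]
    z = sum(w)
    return [wi / z for wi in w]
```

For a mean of 3.5 the constraint is uninformative and the result is the uniform distribution; a mean of 4.5 tilts probability toward the high faces.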
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation (CPM) over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Full Text Available Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate their possible usefulness in correct sexual identification. The study sample consisted of 184 dry, normal, adult human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. The mean values obtained were 451.81 and 417.48 for right male and female bones, and 453.35 and 420.44 for left male and female bones respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 definitely female, while for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
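The demarking-point rule above translates directly into a decision function. The thresholds are the study's own values; the function name and the "indeterminate" label for the overlap zone are ours:

```python
def sex_from_femur(max_length_mm, side):
    """Classify sex from maximum femoral length using the study's
    demarking points (right: >476.70 male, <379.99 female;
    left: >484.49 male, <385.73 female). Lengths in mm."""
    male_dp, female_dp = {"right": (476.70, 379.99),
                          "left": (484.49, 385.73)}[side]
    if max_length_mm > male_dp:
        return "male"
    if max_length_mm < female_dp:
        return "female"
    return "indeterminate"
```

Note how wide the indeterminate zone is: it is exactly why the demarking points identify only a small percentage of each group, as the abstract reports.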
Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)
2009-02-19
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic
Roof separation characteristics of laminated weak roof strata of longwall roadway
LU Ting-kan; LIU Yu-zhou
2004-01-01
Roof separation was investigated in a coal mine as part of the site characterization of roof strata deterioration in a longwall roadway. The separation of laminated, weak roof strata was initially characterized in terms of the maximum separation, the effect of geological setting on separation, and the effect of mining activities (heading development, time-dependent deformation and longwall extraction) on separation. The separation process was then studied to answer the questions of when the separation occurs; where it is located and what geological setting it relates to; how large it is; and how it propagates.
Chang, Paul K
2014-01-01
Interdisciplinary and Advanced Topics in Science and Engineering, Volume 3: Separation of Flow presents the problem of the separation of fluid flow. This book provides information covering the fields of basic physical processes, analyses, and experiments concerning flow separation. Organized into 12 chapters, this volume begins with an overview of flow separation on the body surface as discussed in various classical examples. The text then examines the analytical and experimental results for the laminar boundary layer of steady, two-dimensional flows in the subsonic speed range. Other chapt
The separation of adult separation anxiety disorder.
Baldwin, David S; Gordon, Robert; Abelli, Marianna; Pini, Stefano
2016-08-01
The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) categorization of mental disorders places "separation anxiety disorder" within the broad group of anxiety disorders, and its diagnosis no longer rests on establishing an onset during childhood or adolescence. In previous editions of DSM, it was included within the disorders usually first diagnosed in infancy, childhood, or adolescence, with the requirement for an onset of symptoms before the age of 18 years: symptomatic adults could only receive a retrospective diagnosis, based on establishing this early onset. The new position of separation anxiety disorder is based upon the findings of epidemiological studies that revealed the unexpectedly high prevalence of the condition in adults, often in individuals with an onset of symptoms after the teenage years; its prominent place within the DSM-5 group of anxiety disorders should encourage further research into its epidemiology, etiology, and treatment. This review examines the clinical features and boundaries of the condition, and offers guidance on how it can be distinguished from other anxiety disorders and other mental disorders in which "separation anxiety" may be apparent.
Separating a water-propanol mixture using PDMS pervaporation membranes
Mahacine Amrani
2010-04-01
Full Text Available Recovering and purifying organic solvents during chemical and pharmaceutical synthesis has great economic and environmental importance. Water-alcohol mixture pervaporation was investigated using a pervaporation cell and hydrophobic membranes. This work studied the performance of polydimethylsiloxane (PDMS) membranes and other hydrophobic membranes for removing propanol from aqueous mixtures. PDMS is recognised as being alcohol-permselective during pervaporation. It was also observed that water was transferred through the hydrophobic membrane, as water's molecular size is smaller than that of propanol. A laboratory-scale pervaporation unit was used to study the membrane's separation characteristics in terms of pervaporation flux and selectivity for feeds containing up to water mass at 30°C-50°C. The total propanol/water flux was observed to vary as the operating temperature increased. Although PDMS membranes presented good characteristics for separating water/propanol mixtures, the separation factor and pervaporation flux decreased as the water content in the feed increased. The tested membrane was found to be very efficient for water concentrations of less than 0.3, corresponding to the maximum total flux transfer.
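The separation factor mentioned above has a standard definition in pervaporation; the sketch below uses that textbook form for the preferentially permeating species (the abstract does not state the exact definition the authors used, and the sample fractions in the test are illustrative):

```python
def separation_factor(x_feed, y_permeate):
    """Pervaporation separation factor for the preferentially permeating
    species (propanol here): alpha = [y/(1-y)] / [x/(1-x)], where x and
    y are its mass fractions in the feed and permeate respectively."""
    return (y_permeate / (1.0 - y_permeate)) / (x_feed / (1.0 - x_feed))
```

A factor above 1 means the membrane enriches the permeate in that species; the abstract's observation is that this factor falls as the feed's water content rises.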
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CC needed in the design of an SFCL can be determined.
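The closing step, sizing the conductor from the maximum permissible voltage, reduces to a division: the conductor must be long enough that the design voltage spread over it stays at or below the per-centimetre limit. The 10 kV design voltage in the test is a made-up example, not a figure from the paper:

```python
def required_cc_length_m(design_voltage_v, max_permissible_v_per_cm):
    """Minimum coated-conductor length (m) so that no centimetre exceeds
    the maximum permissible voltage: L = V_design / E_max.
    max_permissible_v_per_cm is in V/cm, hence the factor of 100."""
    return design_voltage_v / (max_permissible_v_per_cm * 100.0)
```

With the SJTU CC value of 0.72 V/cm at a 100 ms quench, a hypothetical 10 kV limiting requirement would call for roughly 139 m of conductor.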
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D, each distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
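A basic building block of any such algorithm is testing whether a candidate tree displays a given rooted triplet. A minimal sketch of the standard lowest-common-ancestor criterion (our illustration, not code from the paper), with the tree given as a child-to-parent map:

```python
def lca(parent, u, v):
    """Lowest common ancestor in a rooted tree given a child->parent map."""
    anc = set()
    while u is not None:
        anc.add(u)
        u = parent.get(u)
    while v not in anc:
        v = parent.get(v)
    return v

def depth(parent, u):
    """Number of edges from node u up to the root."""
    d = 0
    while parent.get(u) is not None:
        u = parent[u]
        d += 1
    return d

def displays_triplet(parent, a, b, c):
    """A tree displays the rooted triplet ab|c iff lca(a, b) lies
    strictly below lca(a, c), which equals lca(b, c)."""
    return depth(parent, lca(parent, a, b)) > depth(parent, lca(parent, a, c))
```

A maximum consistent supertree is then one that maximizes the number (or total weight) of input triplets for which this test succeeds.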
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
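The bound described above (maximum seismic moment = modulus of rigidity times injected volume) can be turned into a magnitude estimate with the standard Hanks-Kanamori moment magnitude relation. A quick sketch; the shear modulus value and the example injection volume are illustrative assumptions, not data from the paper:

```python
import math

# Upper bound on induced seismic moment: (modulus of rigidity) x (injected volume).
# G = 3e10 Pa is a typical crustal shear modulus, assumed here for illustration.

def max_seismic_moment(injected_volume_m3, shear_modulus_pa=3.0e10):
    """Upper-bound seismic moment in N*m for a given injected fluid volume."""
    return shear_modulus_pa * injected_volume_m3

def moment_magnitude(m0_nm):
    """Standard Hanks-Kanamori moment magnitude from seismic moment in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# A hypothetical project injecting one million cubic metres of wastewater:
m0 = max_seismic_moment(1.0e6)          # 3e16 N*m
print(round(moment_magnitude(m0), 1))   # ~4.9, consistent with magnitudes near 5
```

This illustrates why large-volume wastewater disposal, rather than hydraulic fracturing, is associated with the largest observed induced magnitudes.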
Nauta, M.H.; Emmelkamp, P.M.G.; Sturmey, P.; Hersen, M.
2012-01-01
Separation anxiety disorder (SAD) is the only anxiety disorder that is specific to childhood; however, SAD has hardly ever been addressed as a separate disorder in clinical trials investigating treatment outcome. So far, only parent training has been developed specifically for SAD. This particular t
Mineka, Susan; Suomi, Stephen J.
1978-01-01
Reviews phenomena associated with social separation from attachment objects in nonhuman primates. Evaluates four theoretical treatments of separation in light of existing data: Bowlby's attachment-object-loss theory, Kaufman's conservation-withdrawal theory, Seligman's learned helplessness theory, and Solomon and Corbit's opponent-process theory.…
Nonterminal Separating Macro Grammars
Hogendorp, Jan Anne; Asveld, P.R.J.; Nijholt, A.; Verbeek, Leo A.M.
1987-01-01
We extend the concept of nonterminal separating (or NTS) context-free grammar to nonterminal separating $m$-macro grammar, where the mode of derivation $m$ is equal to "unrestricted", "outside-in" or "inside-out". Then we show some (partial) characterization results for these NTS $m$-macro grammars.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that of networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
Enhanced separation of rare earth elements
Lyon, K. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Greenhalgh, M. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Herbst, R. S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Garn, T. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Welty, A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Soderstrom, M. D. [Cytec Solvay Group, Tempe, AZ (United States); Jakovljevic, B. [Cytec Solvay Group, Niagara Falls, ON (Canada)
2016-09-01
Industrial rare earth separation processes utilize PC88A, a phosphonic acid ligand, for solvent extraction separations. The separation factors of the individual rare earths, the equipment requirements, and the chemical usage for these flowsheets are well characterized. Alternative ligands such as Cyanex® 572 and the associated flowsheets are being investigated at the pilot scale to determine if significant improvements to the current separation processes can be realized. These improvements are identified as higher separation factors, reduced stage requirements, or reduced chemical consumption. Any of these improvements can significantly affect the costs associated with these challenging separation processes. A mid/heavy rare earth element (REE) separations flowsheet was developed and tested for each ligand in a 30-stage mixer-settler circuit to compare the separation performance of PC88A and Cyanex® 572. The ligand-metal complex strength of Cyanex® 572 provides efficient extraction of REE while significantly reducing the strip acid requirements. Reductions in chemical consumption have a significant impact on process economics for REE separations. The partitioning results summarized in Table 1 indicate that Cyanex® 572 offers the same separation performance as PC88A while reducing acid consumption by 30% in the strip section for the mid/heavy REE separation.

Table 1 – Flowsheet results comparing the separation performance of PC88A and Cyanex® 572 for a mid/heavy REE separation:

Flowsheet effluent     PC88A (Mid REE / Heavy REE)    Cyanex® 572 (Mid REE / Heavy REE)
Raffinate              99.40% / 0.60%                 99.40% / 0.60%
Rich liquor            2.20% / 97.80%                 0.80% / 99.20%
Strip acid required    3.4 M                          2.3 M
Chromatographic Separation of Glucose and Fructose
Kuptsevich, Yu E.; Larionov, Oleg G.; Stal'naya, I. D.; Nakhapetyan, L. A.; Pronin, A. Ya
1987-03-01
The structures, mutarotation, and the physicochemical properties of glucose and fructose as well as methods for their separation are examined. Their chromatographic separation on cation exchangers in the calcium-form is discussed in detail. A theory of the formation of complexes of carbohydrates with metal cations is described and the mechanism of the separation of glucose and fructose on cation exchangers in the calcium-form is discussed in detail. Factors influencing the chromatographic separation of glucose and fructose on sulphonic acid cation-exchange resins are also considered. The bibliography includes 138 references.
Spiral microfluidic nanoparticle separators
Bhagat, Ali Asgar S.; Kuntaegowdanahalli, Sathyakumar S.; Dionysiou, Dionysios D.; Papautsky, Ian
2008-02-01
Nanoparticles have potential applications in many areas such as consumer products, health care, electronics, energy and other industries. As the use of nanoparticles in manufacturing increases, we anticipate a growing need to detect and measure particles of nanometer scale dimensions in fluids to control emissions of possible toxic nanoparticles. At present most particle separation techniques are based on membrane assisted filtering schemes. Unfortunately their efficiency is limited by the membrane pore size, making them inefficient for separating a wide range of sizes. In this paper, we propose a passive spiral microfluidic geometry for momentum-based particle separations. The proposed design is versatile and is capable of separating particulate mixtures over a wide dynamic range and we expect it will enable a variety of environmental, medical, or manufacturing applications that involve rapid separation of nanoparticles in real-world samples with a wide range of particle components.
Bernauer, Jan C.
2010-09-24
The electromagnetic form factors of the proton are fundamental quantities sensitive to the distribution of charge and magnetization inside the proton. Precise knowledge of the form factors, in particular of the charge and magnetization radii, provides strong tests for theory in the non-perturbative regime of QCD. However, the existing data at Q² below 1 (GeV/c)² are not precise enough for a hard test of theoretical predictions. For a more precise determination of the form factors, within this work more than 1400 cross sections of the reaction H(e,e′)p were measured at the Mainz Microtron MAMI using the 3-spectrometer facility of the A1 collaboration. The data were taken in three periods in the years 2006 and 2007 using beam energies of 180, 315, 450, 585, 720 and 855 MeV. They cover the Q² region from 0.004 to 1 (GeV/c)² with counting rate uncertainties below 0.2% for most of the data points. The relative luminosity of the measurements was determined using one of the spectrometers as a luminosity monitor. The overlapping acceptances of the measurements maximize the internal redundancy of the data and allow, together with several additions to the standard experimental setup, for tight control of systematic uncertainties. To account for the radiative processes, an event generator was developed and implemented in the simulation package of the analysis software; it works without peaking approximation by explicitly calculating the Bethe-Heitler and Born Feynman diagrams for each event. To separate the form factors and to determine the radii, the data were analyzed by fitting a wide selection of form factor models directly to the measured cross sections. These fits also determined the absolute normalization of the different data subsets. The validity of this method was tested with extensive simulations. The results were compared to an extraction via the standard Rosenbluth technique. The dip structure in G_E that was seen in the
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced…
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used… algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find…
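First-Fit-Increasing, one of the algorithms named above, can be sketched in a few lines: items are sorted by increasing size and each is placed in the first open bin with room, opening a new unit-capacity bin only when none fits. The capacity and item sizes here are illustrative, and this is the generic packing rule rather than the paper's analysis of it:

```python
# First-Fit-Increasing bin packing sketch (unit-capacity bins assumed).

def first_fit_increasing(items, capacity=1.0):
    """Pack item sizes in (0, capacity] and return the resulting bins."""
    bins = []  # each bin is a list of the item sizes placed in it
    for size in sorted(items):           # increasing order is what makes it FFI
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)           # first bin with room wins
                break
        else:
            bins.append([size])          # no bin fits: open a new one
    return bins

packing = first_fit_increasing([0.6, 0.5, 0.4, 0.3, 0.2])
print(len(packing))  # -> 3 bins for this instance
```

In the maximum resource setting studied above, the interest is in how many bins such natural rules end up using relative to the maximum possible, rather than the usual minimization.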
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions…
Direct maximum parsimony phylogeny reconstruction from genotype data
Ravi R
2007-12-01
Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole-genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
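For a fixed tree, the minimum number of mutations needed to explain the leaf sequences (the quantity being minimized above) can be computed site by site with Fitch's classic small-parsimony algorithm. This is standard background machinery applied to haplotype-style binary sequences, not the paper's genotype method; the toy tree and sequences are hypothetical:

```python
# Fitch's algorithm: minimum mutations on a fixed binary tree, one site at a time.

def fitch_site(tree, states):
    """Minimum mutations for one site. tree: leaf name or a (left, right) pair."""
    def walk(node):
        if isinstance(node, str):                 # leaf: its observed state
            return {states[node]}, 0
        (ls, lc), (rs, rc) = walk(node[0]), walk(node[1])
        inter = ls & rs
        if inter:
            return inter, lc + rc                 # children agree: no mutation here
        return ls | rs, lc + rc + 1               # disagreement: charge one mutation

    return walk(tree)[1]

def parsimony_score(tree, sequences):
    """Total mutation count over all sites of equal-length sequences."""
    n_sites = len(next(iter(sequences.values())))
    return sum(
        fitch_site(tree, {leaf: seq[i] for leaf, seq in sequences.items()})
        for i in range(n_sites)
    )

tree = (("A", "B"), ("C", "D"))
seqs = {"A": "00", "B": "01", "C": "10", "D": "11"}
print(parsimony_score(tree, seqs))  # -> 3 mutations for this toy instance
```

The paper's point is that scoring trees built on computationally phased haplotypes can overstate this count relative to working on the genotypes directly.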
Supercritical fluid reverse micelle separation
Fulton, John L.; Smith, Richard D.
1993-01-01
A method of separating solute material from a polar fluid in a first polar fluid phase is provided. The method comprises combining a polar fluid, a second fluid that is a gas at standard temperature and pressure and has a critical density, and a surfactant. The solute material is dissolved in the polar fluid to define the first polar fluid phase. The combined polar and second fluids, surfactant, and solute material dissolved in the polar fluid is maintained under near critical or supercritical temperature and pressure conditions such that the density of the second fluid exceeds the critical density thereof. In this way, a reverse micelle system defining a reverse micelle solvent is formed which comprises a continuous phase in the second fluid and a plurality of reverse micelles dispersed in the continuous phase. The solute material is dissolved in the polar fluid and is in chemical equilibrium with the reverse micelles. The first polar fluid phase and the continuous phase are immiscible. The reverse micelles each comprise a dynamic aggregate of surfactant molecules surrounding a core of the polar fluid. The reverse micelle solvent has a polar fluid-to-surfactant molar ratio W, which can vary over a range having a maximum ratio W_o that determines the maximum size of the reverse micelles. The maximum ratio W_o of the reverse micelle solvent is then varied, and the solute material from the first polar fluid phase is transported into the reverse micelles in the continuous phase at an extraction efficiency determined by the critical or supercritical conditions.
Supercritical fluid reverse micelle separation
Fulton, J.L.; Smith, R.D.
1993-11-30
A method of separating solute material from a polar fluid in a first polar fluid phase is provided. The method comprises combining a polar fluid, a second fluid that is a gas at standard temperature and pressure and has a critical density, and a surfactant. The solute material is dissolved in the polar fluid to define the first polar fluid phase. The combined polar and second fluids, surfactant, and solute material dissolved in the polar fluid is maintained under near critical or supercritical temperature and pressure conditions such that the density of the second fluid exceeds the critical density thereof. In this way, a reverse micelle system defining a reverse micelle solvent is formed which comprises a continuous phase in the second fluid and a plurality of reverse micelles dispersed in the continuous phase. The solute material is dissolved in the polar fluid and is in chemical equilibrium with the reverse micelles. The first polar fluid phase and the continuous phase are immiscible. The reverse micelles each comprise a dynamic aggregate of surfactant molecules surrounding a core of the polar fluid. The reverse micelle solvent has a polar fluid-to-surfactant molar ratio W, which can vary over a range having a maximum ratio W_o that determines the maximum size of the reverse micelles. The maximum ratio W_o of the reverse micelle solvent is then varied, and the solute material from the first polar fluid phase is transported into the reverse micelles in the continuous phase at an extraction efficiency determined by the critical or supercritical conditions. 27 figures.
Proposal of a New Hf(IV)/Zr(IV) Separation System by the Solvent Extraction Method
SABERYAN Kamal; MEYSAMI Amir Hosein; RASHCHI Fereshteh; ZOLFONOUN Ehsan
2008-01-01
A liquid-liquid extraction study has been conducted to separate hafnium from zirconium, using Cyanex 301 in kerosene. Notably, it is the first time that Cyanex 301 has been utilized to separate Hf(IV) from Zr(IV). In this series of experiments, several parameters influencing the separation have been investigated, such as the initial pH, the extractant concentration, the metal ion concentration, the temperature, the type of diluent and the salt addition. Regarding the aging of the Zr(IV) and Hf(IV) solutions, solutions with a maximum aging time of 3 d could be used without difficulty. It was observed that an increase in the initial pH caused an increase in the Zr(IV)/Hf(IV) separation factor. Moreover, the distribution decreased as the temperature increased, suggesting that the reaction is exothermic. In agreement with the resulting data, the optimum separation factor has a value of 7 at a pH of 4.00 in the presence of NaCl as an added salt. The attractive characteristics of the presently designed method are the use of low-acidity nitrate solutions, the avoidance of thiocyanate and the higher extractability of hafnium-Cyanex 301 relative to zirconium-Cyanex 301 complexes.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered, and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling outside the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of various converter systems is verified through simulations. The proposed theory and analysis are validated through experimental investigations.
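In practice the duty ratio of such a converter is steered toward the MPP by a tracking loop; a common choice (our illustration here, not a method from this paper) is perturb-and-observe hill climbing. The power-vs-duty curve and all constants below are toy stand-ins for the converter models analyzed above:

```python
# Minimal perturb-and-observe MPPT sketch: nudge the duty ratio, keep the
# direction while power rises, reverse when it falls. Entirely illustrative.

def pv_power(duty):
    """Toy power-vs-duty curve with a single maximum at duty = 0.55 (assumed)."""
    return max(0.0, 100.0 - 400.0 * (duty - 0.55) ** 2)

def perturb_and_observe(duty=0.30, step=0.01, iterations=200):
    power = pv_power(duty)
    for _ in range(iterations):
        candidate = min(0.95, max(0.05, duty + step))
        new_power = pv_power(candidate)
        if new_power < power:        # power fell: reverse perturbation direction
            step = -step
            candidate = min(0.95, max(0.05, duty + step))
            new_power = pv_power(candidate)
        duty, power = candidate, new_power
    return duty, power

d, p = perturb_and_observe()
print(round(d, 2), round(p, 1))  # settles near the duty ratio of maximum power
```

The steady-state oscillation around the MPP visible here is the textbook trade-off of fixed-step perturb-and-observe, and it is one reason the choice of converter topology and load range analyzed above matters.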
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}\,y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage S_rad becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
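The quoted scaling v_h ~ T_BBN^2 / (M_pl y_e^5) can be checked numerically with standard values: T_BBN of order 1 MeV, the Planck mass, and the electron Yukawa y_e = sqrt(2) m_e / v. The particular input values below are our assumptions for a back-of-the-envelope check, not numbers from the paper:

```python
import math

# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5), all in GeV.

T_BBN = 1.0e-3                         # onset of Big Bang nucleosynthesis, ~1 MeV
M_pl = 1.22e19                         # Planck mass
y_e = math.sqrt(2) * 0.511e-3 / 246.0  # electron Yukawa, ~2.9e-6

v_h = T_BBN**2 / (M_pl * y_e**5)
print(f"{v_h:.0f} GeV")                # lands at a few hundred GeV, i.e. O(300 GeV)
```

The fifth power of the tiny electron Yukawa is what lifts the minuscule ratio T_BBN^2 / M_pl up to the electroweak scale, which is the punchline of the estimate.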
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of each subject performing five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
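The reported gains from averaging more trials or more days behave like the standard Spearman-Brown prophecy formula, R_k = kR / (1 + (k-1)R). Using that formula to reproduce the abstract's numbers is our illustration; the abstract itself only reports the measured coefficients:

```python
# Spearman-Brown prophecy: reliability of the mean of k parallel measurements.

def spearman_brown(single_reliability, k):
    r = single_reliability
    return k * r / (1 + (k - 1) * r)

# One trial per day was reported at 0.939; averaging five trials gave 0.987:
print(round(spearman_brown(0.939, 5), 3))  # -> 0.987
# A single day (five-trial mean) was 0.836; two days gave 0.911:
print(round(spearman_brown(0.836, 2), 3))  # -> 0.911
```

That the projected and reported values agree so closely is consistent with the trials and days acting as roughly parallel measurements.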
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment…
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
Ceramic membranes for high temperature hydrogen separation
Fain, D.E.; Roettger, G.E. [Oak Ridge K-25 Site, TN (United States)]
1996-08-01
Ceramic gas separation membranes can provide very high separation factors if the pore size is sufficiently small to separate gas molecules by molecular sieving and if oversized pores are adequately limited. Ceramic membranes typically have some pores that are substantially larger than the mean pore size and that should be regarded as defects. To assess the effects of such defects on the performance of ceramic membranes, a simple mathematical model has been developed to describe flow through a gas separation membrane that has a primary mode of flow through very small pores but a secondary mode of flow through undesirably large pores. This model permits separation factors to be calculated for a specified gas pair as a function of the molecular weights and molecular diameters of the gases, the membrane pore diameter, and the diameter and number of defects. This model will be described, and key results from the model will be presented. The separation factors of the authors' membranes continue to be determined using a permeance test system that measures flows of pure gases through a membrane at temperatures up to 275 °C. A primary goal of this project for FY 1996 is to develop a mixed gas separation system for measuring the separation efficiency of membranes at higher temperatures. Performance criteria have been established for the planned mixed gas separation system, and design of the system has been completed. The test system is designed to measure the separation efficiency of membranes at temperatures up to 600 °C and pressures up to 100 psi by separating the constituents of a gas mixture containing hydrogen. The system will accommodate the authors' typical experimental membrane, which is tubular with a diameter of about 9 mm and a length of about 23 cm. The design of the new test system and its expected performance will be discussed.
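The two-mode flow model described in this abstract can be made concrete under stated assumptions. The sketch below assumes Knudsen (molecular) flow through the small selective pores, giving an ideal selectivity of sqrt(M_heavy/M_light), and completely non-selective flow through oversized defect pores; the `defect_fraction` parameter is a hypothetical lumping of the geometric terms (defect diameter and count) in the authors' model, not their actual parameterization.

```python
import math

def knudsen_selectivity(m_light: float, m_heavy: float) -> float:
    """Ideal Knudsen separation factor in favor of the lighter gas."""
    return math.sqrt(m_heavy / m_light)

def effective_separation_factor(m_light: float, m_heavy: float,
                                defect_fraction: float) -> float:
    """Effective separation factor when part of the flow bypasses the
    selective pores through non-selective defects.

    defect_fraction: fraction of the light-gas flux carried by defects
    (hypothetical stand-in for defect diameter and number)."""
    alpha_k = knudsen_selectivity(m_light, m_heavy)
    light_flux = 1.0
    # Heavy gas: selective pores pass 1/alpha_k relative flux,
    # defect pores pass the same flux as for the light gas.
    heavy_flux = (1 - defect_fraction) / alpha_k + defect_fraction
    return light_flux / heavy_flux

# H2 (2 g/mol) vs CO2 (44 g/mol): ideal Knudsen selectivity is sqrt(22)
print(round(effective_separation_factor(2.0, 44.0, 0.0), 2))   # defect-free
print(round(effective_separation_factor(2.0, 44.0, 0.05), 2))  # 5% defect flow
```

Even a small defect flux pulls the effective separation factor well below the ideal value, which is the qualitative point of the model.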
Hydrograph separation using stable isotopes: Review and evaluation
Klaus, J.; McDonnell, J. J.
2013-11-01
We review isotope hydrograph separation studies. We examine methods, applications, and limitations. We summarize the factors that control event/pre-event water contributions. We outline possible new research avenues in isotope hydrograph separation.
Extraction and separation of proteoglycans.
Yanagishita, Masaki; Podyma-Inoue, Katarzyna Anna; Yokoyama, Miki
2009-11-01
Proteoglycans contain a unique carbohydrate component, glycosaminoglycan, which consists of repeating, typically sulfated disaccharides, and is capable of interacting with diverse molecules. Specific, clustered arrangements of sulfate on the glycosaminoglycan backbone form binding sites for many biologically important ligands such as extracellular matrix molecules and growth factors. Core proteins of proteoglycans also show molecular interactions necessary for organizing scaffolds in the extracellular matrix or for anchoring proteoglycans to the plasma membrane. Experimental protocols aiming at extracting maximal amounts of proteoglycans from tissues or cells require disruption of molecular interactions involving proteoglycans by denaturing solvents. Among the many proteoglycan separation procedures, anion exchange chromatography, which takes advantage of the highly negatively charged glycosaminoglycans present in all proteoglycans, serves as one of the most convenient general separation techniques.
Gheshlaghi, R; Scharer, J M; Moo-Young, M; Douglas, P L
2008-12-01
Modified resolution and overall separation factors used to quantify the separation of complex chromatography systems are described. These factors were proven to be applicable to the optimization of amino acid resolution in reverse-phase (RP) HPLC chromatograms. To optimize precolumn derivatization with phenylisothiocyanate, a 2(5-1) fractional factorial design in triplicate was employed. The five independent variables for optimizing the overall separation factor were triethylamine content of the aqueous buffer, pH of the aqueous buffer, separation temperature, methanol/acetonitrile concentration ratio in the organic eluant, and mobile phase flow rate. Of these, triethylamine concentration and methanol/acetonitrile concentration ratio were the most important. The methodology captured the interaction between variables. Temperature appeared in the interaction terms; consequently, it was included in the hierarchic model. The preliminary model based on the factorial experiments was not able to explain the response curvature in the design space; therefore, a central composite design was used to provide a quadratic model. Constrained nonlinear programming was used for optimization purposes. The quadratic model predicted the optimal levels of the variables. In this study, the best levels of the five independent variables that provide the maximum modified resolution for each pair of consecutive amino acids appearing in the chromatogram were determined. These results are of utmost importance for accurate analysis of a subset of amino acids.
STUDY ON MAXIMUM SPECIFIC SLUDGE ACTIVITY OF DIFFERENT ANAEROBIC GRANULAR SLUDGE BY BATCH TESTS
Anonymous
2001-01-01
The maximum specific sludge activities of granular sludge from large-scale UASB, IC and Biobed anaerobic reactors were investigated by batch tests. The limiting factors related to maximum specific sludge activity (diffusion, substrate type, substrate concentration and granule size) were studied. A general principle and procedure for the precise measurement of maximum specific sludge activity are suggested. The potential loading-rate capacities of the IC and Biobed anaerobic reactors were analyzed and compared using the batch test results.
Radiochemical separation of Cobalt
Erkelens, P.C. van
1961-01-01
A method is described for the radiochemical separation of cobalt based on the extraordinary stability of cobalt diethyldithiocarbamate. Interferences are few; only very small amounts of zinc and iron accompany cobalt, which is important in neutron-activation analysis.
Dai, Liang; Schmidt, Fabian
2015-01-01
The separate universe conjecture states that in General Relativity a density perturbation behaves locally (i.e. on scales much smaller than the wavelength of the mode) as a separate universe with different background density and curvature. We prove this conjecture for a spherical compensated tophat density perturbation of arbitrary amplitude and radius in ΛCDM. We then use Conformal Fermi Coordinates to generalize this result to scalar perturbations of arbitrary configuration and scale in a general cosmology with a mixture of fluids, but to linear order in perturbations. In this case, the separate universe conjecture holds for the isotropic part of the perturbations. The anisotropic part, on the other hand, is exactly captured by a tidal field in the Newtonian form. We show that the separate universe picture is restricted to scales larger than the sound horizons of all fluid components. We then derive an expression for the locally measured matter bispectrum induced by a long-wavelength mode of arbitrary...
Electroextraction separation of dyestuffs
Luo, G.S.; Yu, M.J.; Jiang, W.B.; Zhu, S.L.; Dai, Y.Y. [Tsinghua Univ., Beijing (China). Dept. of Chemical Engineering
1999-03-01
Electroseparation technologies have prospects for significant growth well into the next century. Electroextraction, a separation technique coupling solvent extraction with electrophoresis, was used to remove dyestuffs from an aqueous stream. A study of the characteristics of the separation technique was carried out with n-butanol/acid-chrom blue K/water and n-butanol/methyl blue/water as working systems. Continuous separation equipment was designed and used in this work. The influences of two-phase flow, field strength, and feed concentration on the recovery of solute were studied. The results showed that much higher recovery of solute with less solvent consumption could be achieved by using this technique to remove dyes from aqueous streams, especially for the separation of dilute solutions. When the field strength is increased, the recovery and mass flux increase. When the feed flow rate and the initial solute concentration are increased, the recovery decreases and the mass flux increases.
Shoulder separation - aftercare
... and top of your shoulder blade A severe shoulder separation You may need surgery right away if you have: Numbness in your fingers Cold fingers Muscle weakness in your arm Severe deformity of the joint
Mundschau, Michael [Longmont, CO]; Xie, Xiaobing [Foster City, CA]; Evenson, Carl, IV; Grimmer, Paul [Longmont, CO]; Wright, Harold [Longmont, CO]
2011-05-24
A method for separating a hydrogen-rich product stream from a feed stream comprising hydrogen and at least one carbon-containing gas, comprising feeding the feed stream, at an inlet pressure greater than atmospheric pressure and a temperature greater than 200 °C, to a hydrogen separation membrane system comprising a membrane that is selectively permeable to hydrogen, and producing a hydrogen-rich permeate product stream on the permeate side of the membrane and a carbon dioxide-rich product raffinate stream on the raffinate side of the membrane. A method for separating a hydrogen-rich product stream from a feed stream comprising hydrogen and at least one carbon-containing gas, comprising feeding the feed stream, at an inlet pressure greater than atmospheric pressure and a temperature greater than 200 °C, to an integrated water gas shift/hydrogen separation membrane system wherein the hydrogen separation membrane system comprises a membrane that is selectively permeable to hydrogen, and producing a hydrogen-rich permeate product stream on the permeate side of the membrane and a carbon dioxide-rich product raffinate stream on the raffinate side of the membrane. A method for pretreating a membrane, comprising: heating the membrane to a desired operating temperature and desired feed pressure in a flow of inert gas for a sufficient time to cause the membrane to mechanically deform; decreasing the feed pressure to approximately ambient pressure; and optionally, flowing an oxidizing agent across the membrane before, during, or after deformation of the membrane. A method of supporting a hydrogen separation membrane system comprising selecting a hydrogen separation membrane system comprising one or more catalyst outer layers deposited on a hydrogen transport membrane layer and sealing the hydrogen separation membrane system to a porous support.
Separation techniques: Chromatography
Coskun, Ozlem
2016-01-01
Chromatography is an important biophysical technique that enables the separation, identification, and purification of the components of a mixture for qualitative and quantitative analysis. Proteins can be purified based on characteristics such as size and shape, total charge, hydrophobic groups present on the surface, and binding capacity with the stationary phase. Four separation techniques based on molecular characteristics and interaction type use mechanisms of ion exchange, surface adsorp...
Distal humeral epiphyseal separation.
Moucha, Calin S; Mason, Dan E
2003-10-01
Distal humeral epiphyseal separation is an uncommon injury that is often misdiagnosed upon initial presentation. To make a timely, correct diagnosis, the treating physician must have a thorough understanding of basic anatomical relationships and an awareness of the existence of this injury. This is a case of a child who sustained a separation of the distal humeral epiphysis, as well as multiple other bony injuries, secondary to child abuse.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall within this class of maximum-entropy distributions when the constraints are purely kinematic.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We find the approximation ratios of two natural approximation algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
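To make the algorithm names concrete, here is a sketch of classical First-Fit-Decreasing bin packing, the baseline the maximum resource variants build on. The item sizes and capacity are illustrative, not taken from the paper:

```python
def first_fit_decreasing(sizes, capacity):
    """Classical First-Fit-Decreasing: sort items by non-increasing size,
    place each into the first bin with enough room, opening a new bin
    only when no existing bin fits."""
    bins = []  # each bin is a list of item sizes
    for size in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

packing = first_fit_decreasing([7, 6, 5, 5, 5, 4, 2, 2, 1], capacity=10)
print(len(packing))  # number of bins used
```

In the maximum resource variant studied in the paper, the objective flips: the goal is to *maximize* the bin count, so greedy rules like the above serve as approximation algorithms whose ratios are analyzed rather than as optimizers.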
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
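The single-constraint construction this abstract describes can be sketched directly: maximizing the Shannon entropy subject only to normalization and a fixed mean of the logarithm of the observable yields a pure power law.

```latex
% Maximize  S[p] = -\int p(x)\,\ln p(x)\,dx
% subject to \int p(x)\,dx = 1  and  \int p(x)\,\ln x\,dx = \chi.
%
% Stationarity of the Lagrangian
%   \mathcal{L} = S[p] - \mu\!\left(\int p\,dx - 1\right)
%                 - \lambda\!\left(\int p\,\ln x\,dx - \chi\right)
% with respect to p(x) gives
%   -\ln p(x) - 1 - \mu - \lambda \ln x = 0,
% hence
%   p(x) = e^{-(1+\mu)}\, x^{-\lambda} \;\propto\; x^{-\lambda},
% a power law whose exponent \lambda is fixed by the constraint value \chi.
```

This is the sense in which the paper argues the RGF cost function is unnecessarily complicated: the single constraint on ⟨ln x⟩ already produces the power-law form.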
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot both be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This maximum radius is reduced for planets with higher Fe/Si ratios and when taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system, where the adjoint process is explicitly expressed.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),...,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
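As a worked instance of the definition $EE(G)=\sum_i e^{\lambda_i}$, the sketch below computes the Estrada index of a small illustrative graph (the triangle K3, not one of the paper's bicyclic extremal graphs) from the eigenvalues of its adjacency matrix:

```python
import math
import numpy as np

def estrada_index(adjacency: np.ndarray) -> float:
    """Estrada index EE(G): sum of exp(lambda_i) over the eigenvalues
    of the (symmetric) adjacency matrix."""
    eigenvalues = np.linalg.eigvalsh(adjacency)
    return float(np.exp(eigenvalues).sum())

# Triangle K3: adjacency eigenvalues are 2, -1, -1, so EE = e^2 + 2/e
k3 = np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]], dtype=float)
print(round(estrada_index(k3), 4))
```

`eigvalsh` is used because adjacency matrices of simple graphs are symmetric, which keeps the eigenvalues real.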
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were roughly half as large (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus differed from that of the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
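The prediction equation above can be applied directly once YX/P and the lactate MIC are known for a strain. A minimal sketch (the coefficient 0.59 comes from the abstract; the example strain values are hypothetical):

```python
def predict_max_biomass(x0, yield_per_lactate, mic_lactate, k=0.59):
    """Predict maximum biomass X_max (g/L) from the fitted relation
    X_max - X_0 = k * Y_X/P * C, where Y_X/P is the biomass yield per
    unit lactate produced and C is the MIC of lactate at pH 7.0."""
    return x0 + k * yield_per_lactate * mic_lactate

# Hypothetical strain: inoculum 0.1 g/L, yield 0.15 g biomass per g lactate,
# lactate MIC 60 g/L.
print(predict_max_biomass(0.1, 0.15, 60.0))  # 5.41 g/L
```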
Thomas T. Lei; Shawn W. Semones; John F. Walker; Barton D. Clinton; Erik T. Nilsen
2002-01-01
In the southern Appalachian forests, the regeneration of canopy trees is severely inhibited by Rhododendron maximum L., an evergreen understory shrub producing dense thickets. While light availability is a major cause, other factors may also contribute to the absence of tree seedlings under R. maximum. We examined the effects of...
Organic Separation Test Results
Russell, Renee L.; Rinehart, Donald E.; Peterson, Reid A.
2014-09-22
Separable organics have been defined as “those organic compounds of very limited solubility in the bulk waste and that can form a separate liquid phase or layer” (Smalley and Nguyen 2013); they result from three main solvent extraction processes: the U Plant Uranium Recovery Process, the B Plant Waste Fractionation Process, and the Plutonium Uranium Extraction (PUREX) Process. The primary organic solvents associated with tank solids are TBP, D2EHPA, and NPH. There is concern that, while this organic material is bound to the sludge particles as it is stored in the tanks, waste feed delivery activities, specifically transfer pump and mixer pump operations, could cause the organics to form a separated layer in the tank farms feed tank. Therefore, Washington River Protection Solutions (WRPS) is experimentally evaluating the potential of organic solvents separating from the tank solids (sludge) during waste feed delivery activities, specifically the waste mixing and transfer processes. Given the Hanford Tank Waste Treatment and Immobilization Plant (WTP) waste acceptance criterion, per the Waste Feed Acceptance Criteria document (24590-WTP-RPT-MGT-11-014), that there be “no visible layer” of separable organics in the waste feed, a separated organic layer would render a batch unacceptable for transfer to WTP. This study is of particular importance to WRPS because of these WTP requirements.
Gulf stream separation dynamics
Schoonover, Joseph
Climate models currently struggle with the traditional, coarse (O(100 km)) representation of the ocean. In these coarse ocean simulations, western boundary currents are notoriously difficult to model accurately. The modeled Gulf Stream typically exhibits a mean pathway that is north of observations and is linked to a warm sea-surface temperature bias in the Mid-Atlantic Bight. Although increased resolution (O(10 km)) improves the modeled Gulf Stream position, there is no clean recipe for obtaining the proper pathway. The 70-year history of literature on Gulf Stream separation suggests that we have not reached a consensus on the dynamics that control the current's pathway just south of the Mid-Atlantic Bight. Without concrete knowledge of the separation dynamics, we cannot provide a clean recipe for accurately modeling the Gulf Stream at increased resolutions. Further, any reliable parameterization that yields a realistic Gulf Stream path must express the proper physics of separation. The goal of this dissertation is to determine what controls Gulf Stream separation. To do so, we examine the results of a model intercomparison study and a set of numerical regional terraforming experiments. It is argued that the separation is governed by local dynamics that are most sensitive to the steepening of the continental shelf, consistent with the topographic wave arrest hypothesis of Stern (1998). A linear extension of Stern's theory is provided, which illustrates that wave arrest is possible for a continuously stratified fluid.
Separably injective Banach spaces
Avilés, Antonio; Castillo, Jesús M F; González, Manuel; Moreno, Yolanda
2016-01-01
This monograph contains a detailed exposition of the up-to-date theory of separably injective spaces: new and old results are put into perspective with concrete examples (such as l∞/c0 and C(K) spaces, where K is a finite height compact space or an F-space, ultrapowers of L∞ spaces and spaces of universal disposition). It is no exaggeration to say that the theory of separably injective Banach spaces is strikingly different from that of injective spaces. For instance, separably injective Banach spaces are not necessarily isometric to, or complemented subspaces of, spaces of continuous functions on a compact space. Moreover, in contrast to the scarcity of examples and general results concerning injective spaces, we know of many different types of separably injective spaces and there is a rich theory around them. The monograph is completed with a preparatory chapter on injective spaces, a chapter on higher cardinal versions of separable injectivity and a lively discussion of open problems and further lines o...
Maximum Bipartite Matching Size And Application to Cuckoo Hashing
Kanizo, Yossi; Keslassy, Isaac
2010-01-01
Cuckoo hashing with a stash is a robust high-performance hashing scheme that can be used in many real-life applications. It complements cuckoo hashing by adding a small stash storing the elements that cannot fit into the main hash table due to collisions. However, the exact required size of the stash and the tradeoff between its size and the memory over-provisioning of the hash table are still unknown. We settle this question by investigating the equivalent maximum matching size of a random bipartite graph, with a constant left-side vertex degree $d=2$. Specifically, we provide an exact expression for the expected maximum matching size and show that its actual size is close to its mean, with high probability. This result relies on decomposing the bipartite graph into connected components, and then separately evaluating the distribution of the matching size in each of these components. In particular, we provide an exact expression for any finite bipartite graph size and also deduce asymptotic results as the nu...
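The connection between cuckoo hashing and bipartite matching can be explored empirically: each element is a left vertex adjacent to its d = 2 candidate buckets, and the stash must hold exactly the unmatched left vertices. A self-contained sketch using Kuhn's augmenting-path algorithm (all names here are my own, not from the paper):

```python
import random

def max_matching_size(adj, n_right):
    """Maximum matching in a bipartite graph given as adj[u] = list of
    right-side neighbours of left vertex u (Kuhn's augmenting paths)."""
    match_r = [-1] * n_right

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_r[v] == -1 or augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in range(len(adj)))

# Cuckoo-hashing analogue: n elements, each hashed to d = 2 of m buckets.
rng = random.Random(0)
n, m = 300, 400
adj = [rng.sample(range(m), 2) for _ in range(n)]
matched = max_matching_size(adj, m)
print(n - matched, "elements would overflow to the stash")
```

Averaging the overflow over many random graphs approximates the expected stash size studied in the paper as a function of the load n/m.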
Fábio Broggi
2011-02-01
The Phosphorus Capacity Factor (FCP) is defined as the equilibrium ratio between the phosphorus quantity factor (Q) and the intensity factor (I), and represents a measure of the soil's capacity to maintain a given level of P in solution. The characteristics and content of the mineral constituents of the clay fraction are responsible for a higher or lower FCP, affecting soil-plant relationships. Soil pH, on the other hand, has in some cases shown an effect on adsorption and, in others, only a small and inconsistent change in the maximum P adsorption capacity (CMAP). The objectives of this work were to determine the FCP of mineralogically different soils of Pernambuco, Brazil; to correlate physical and chemical soil characteristics with the FCP; and to evaluate the effect of pH on the CMAP. Subsurface samples of four mineralogically different soils were characterized chemically and physically, and their FCP was determined. These samples were limed with CaCO3 and MgCO3 at a 4:1 ratio and incubated for 30 days, except for the Vertisol. The CMAP was determined before and after liming. The experiment consisted of a 4 x 2 factorial (four soils, with and without liming) in randomized blocks with three replicates. The soil characteristics that best reflected the FCP were the remaining P (P-rem) and the CMAP. Regardless of the mineralogical constituents of the clay fraction, soils with high aluminium contents showed an increase in CMAP after liming. The adsorption energy (EA) in the limed soils was, on average, significantly lower, regardless of the soil.
Mass Separation by Metamaterials.
Restrepo-Flórez, Juan Manuel; Maldovan, Martin
2016-02-25
Being able to manipulate mass flow is critically important in a variety of physical processes in chemical and biomolecular science. For example, separation and catalytic systems, which require precise control of mass diffusion, are crucial in the manufacturing of chemicals, crystal growth of semiconductors, waste recovery of biological solutes or chemicals, and production of artificial kidneys. Coordinate transformations and metamaterials are powerful methods to achieve precise manipulation of molecular diffusion. Here, we introduce a novel approach to obtain mass separation based on metamaterials that can sort chemical and biomolecular species by cloaking one compound while concentrating the other. A design strategy to realize such a metamaterial using homogeneous isotropic materials is proposed. We present a practical case where a mixture of oxygen and nitrogen is manipulated using a metamaterial that cloaks nitrogen and concentrates oxygen. This work lays the foundation for molecular mass separation in biophysical and chemical systems through metamaterial devices.
Arshinoff, S A
1999-04-01
Phaco slice and separate retains the advantages of the chopping techniques of Nagahara, Koch, and Fukasaku but replaces chopping or snapping with slicing across the center of the phaco-tip-stabilized nucleus using a Nagahara chopper and then repositioning the chopper to optimally separate the divided lens halves. As the lens is rotated in the capsular bag, small pieces of the nuclear pie are sliced off, separated, emulsified, and aspirated. Emulsification and aspiration can alternatively be left until most or all the slices have been made. This technique works with a broader range of lens densities than other chopping techniques and uses no sculpting and very little phaco time. The phaco time required for this technique is relatively independent of nuclear density compared with a sculpting technique.
Membrane separation of hydrocarbons
Chang, Y. Alice; Kulkarni, Sudhir S.; Funk, Edward W.
1986-01-01
Mixtures of heavy oils and light hydrocarbons may be separated by passing the mixture through a polymeric membrane. The membrane utilized to effect the separation comprises a polymer which is capable of maintaining its integrity in the presence of hydrocarbon compounds and which has been modified by the action of a sulfonating agent. Sulfonating agents which may be employed include fuming sulfuric acid, chlorosulfonic acid, sulfur trioxide, etc.; the surface- or bulk-modified polymer will contain a degree of sulfonation ranging from about 15 to about 50%. The separation process is effected at temperatures ranging from about ambient to about 100° C. and pressures ranging from about 50 to about 1000 psig.
李文勇; 李明; 钱建平; 孙传恒; 杜尚丰; 陈梅香
2015-01-01
Image segmentation is the precondition for feature extraction and recognition. In order to improve the segmentation accuracy for touching objects in a pest identification and counting system, an image segmentation algorithm based on a shape factor and separation point location was presented. In this method, a shape factor defined from the area and perimeter of a region was used as a parameter to judge whether the region was a touching region or not; the threshold of the shape factor was set to 0.50. If a region was a touching one, its contour was stripped layer by layer, and each contour was checked for a local segmentation point. There were two types of local segmentation points. The first type was a point found twice in one contour, whose traversal sequence numbers satisfied the determined threshold condition. The second type was a point found in one contour together with its four-connected region points, where the difference between their traversal sequence numbers satisfied the same threshold condition. Once a local segmentation point was found, the two separating points of the touching region were searched for and located in its original contour, based on the shortest distance between the local segmentation point and the background pixel points. Finally, segmentation lines were plotted between the local segmentation point and the two separating points. To verify the validity of the proposed algorithm, three types of touching images were used: serial connection, loop connection and hybrid connection images. The results showed that the proposed method could locate the local segmentation points and separating points more accurately than the watershed method. In addition, lab and field images were used to test the reliability of the proposed method. In the lab experiment, 100 yellow peach moth (Conogethes punctiferalis
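The abstract does not give the exact definition of the shape factor, but a standard area/perimeter circularity measure, 4πA/P², behaves as described (near 1 for a compact single object, low for touching blobs); the 0.50 threshold is the one quoted. A sketch under that assumption:

```python
import math

def shape_factor(area, perimeter):
    """Circularity-style shape factor in (0, 1]: 1 for a perfect disc,
    smaller for elongated or touching regions (assumed definition)."""
    return 4.0 * math.pi * area / perimeter ** 2

def is_touching(area, perimeter, threshold=0.50):
    """Flag a region as a candidate touching region, as in the paper."""
    return shape_factor(area, perimeter) < threshold

# A disc of radius 10 vs. a dumbbell-like pair (hypothetical measurements).
print(is_touching(math.pi * 100, 2 * math.pi * 10))  # False: single object
print(is_touching(550.0, 140.0))                     # True: likely touching
```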
Schell, William J.
1979-01-01
A dry, fabric supported, polymeric gas separation membrane, such as cellulose acetate, is prepared by casting a solution of the polymer onto a shrinkable fabric preferably formed of synthetic polymers such as polyester or polyamide filaments before washing, stretching or calendering (so called griege goods). The supported membrane is then subjected to gelling, annealing, and drying by solvent exchange. During the processing steps, both the fabric support and the membrane shrink a preselected, controlled amount which prevents curling, wrinkling or cracking of the membrane in flat form or when spirally wound into a gas separation element.
Separation membrane development
Lee, M.W. [Savannah River Technology Center, Aiken, SC (United States)]
1998-08-01
A ceramic membrane has been developed to separate hydrogen from other gases. The method used is a sol-gel process. A thin layer of dense ceramic material is coated on a coarse ceramic filter substrate. The pore size distribution in the thin layer is controlled by densification of the coating materials by heat treatment. The membrane has been tested by permeation measurements of hydrogen and other gases. The membrane achieved selectivity sufficient to separate hydrogen from carbon monoxide. The permeation rate of hydrogen through the ceramic membrane was about 20 times larger than that of a Pd-Ag membrane.
Separation techniques: Chromatography
Coskun, Ozlem
2016-01-01
Chromatography is an important biophysical technique that enables the separation, identification, and purification of the components of a mixture for qualitative and quantitative analysis. Proteins can be purified based on characteristics such as size and shape, total charge, hydrophobic groups present on the surface, and binding capacity with the stationary phase. Four separation techniques based on molecular characteristics and interaction type use mechanisms of ion exchange, surface adsorption, partition, and size exclusion. Other chromatography techniques are based on the stationary bed, including column, thin layer, and paper chromatography. Column chromatography is one of the most common methods of protein purification. PMID:28058406
Separators for electrochemical cells
Carlson, Steven Allen; Anakor, Ifenna Kingsley
2014-11-11
Provided are separators for use in an electrochemical cell comprising (a) an inorganic oxide and (b) an organic polymer, wherein the inorganic oxide comprises organic substituents. Preferably, the inorganic oxide comprises a hydrated aluminum oxide of the formula Al2O3·xH2O, wherein x is less than 1.0, and wherein the hydrated aluminum oxide comprises organic substituents, preferably comprising a reaction product of a multifunctional monomer and/or organic carbonate with an aluminum oxide, such as pseudo-boehmite and an aluminum oxide. Also provided are electrochemical cells comprising such separators.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
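The core selection rule (pick the hypothesis maximising prior times likelihood) can be sketched generically. This toy version assumes i.i.d. Gaussian noise on phase samples; it is not the patent's estimator-correlator, whose MAP phase-estimation step is more elaborate:

```python
import math

def map_decode(received, hypotheses, priors, sigma=0.3):
    """Return the index of the hypothesized phase sequence with the highest
    posterior under i.i.d. Gaussian phase noise of std dev sigma."""
    def log_post(i):
        s = hypotheses[i]
        ll = -sum((r - h) ** 2 for r, h in zip(received, s)) / (2 * sigma ** 2)
        return math.log(priors[i]) + ll
    return max(range(len(hypotheses)), key=log_post)

# Two candidate phase patterns (radians); received = pattern 1 plus offsets.
h0 = [0.0, math.pi, 0.0, math.pi]
h1 = [0.0, 0.0, math.pi, math.pi]
rx = [0.1, 0.05, math.pi - 0.1, math.pi + 0.05]
print(map_decode(rx, [h0, h1], priors=[0.5, 0.5]))  # 1
```

With equal priors this reduces to maximum-likelihood detection; unequal priors shift the decision toward the more probable hypothesis, which is the point of the MAP formulation.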
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
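For a separable concave objective such as entropy, the bounded-variable piecewise-linear formulation has a convenient structure: segment slopes decrease, so the LP optimum fills segments greedily in slope order. A minimal sketch of that reduction (a full treatment would hand the same segment variables to a simplex solver, as the paper does):

```python
import math

def maxent_piecewise(n_vars, total=1.0, x_max=1.0, n_seg=8):
    """Maximize sum_i H(x_i), H(x) = -x*log(x), subject to sum_i x_i = total,
    0 <= x_i <= x_max, using a piecewise-linear (bounded-variable)
    approximation of H. Concavity lets this LP be solved greedily."""
    w = x_max / n_seg
    H = lambda x: 0.0 if x <= 0 else -x * math.log(x)
    # One bounded variable per (variable, segment), with objective slope:
    segs = [((H((k + 1) * w) - H(k * w)) / w, i)
            for i in range(n_vars) for k in range(n_seg)]
    segs.sort(reverse=True)                  # best marginal entropy first
    x = [0.0] * n_vars
    remaining = total
    for slope, i in segs:
        take = min(w, remaining)
        x[i] += take
        remaining -= take
        if remaining <= 0:
            break
    return x

print(maxent_piecewise(2))  # [0.5, 0.5]: uniform is the max-entropy solution
```

The same segment decomposition, fed to a revised simplex routine with the data-fidelity rows as additional (relaxable) constraints, is the structure the abstract describes.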
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of a ship's draft and trim increase due to ship motion in restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among those most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to show the differences between them and determine which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
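As one concrete point of comparison, a widely quoted simplified form of Barrass's maximum-squat formula for confined channels is δ_max = C_b · S^0.81 · V^2.08 / 20, with V in knots, the result in metres, and S the blockage factor. Treat the exact exponents and divisor as assumptions to verify against the original references before use:

```python
def barrass_max_squat(cb, blockage, speed_knots):
    """Barrass-style maximum squat estimate (metres) in a confined channel.
    cb: block coefficient; blockage: midship area / channel cross-section;
    speed_knots: ship speed through the water (knots)."""
    return cb * blockage ** 0.81 * speed_knots ** 2.08 / 20.0

# Cargo ship example (hypothetical figures): Cb = 0.8, blockage 0.2, 10 kn.
print(round(barrass_max_squat(0.8, 0.2, 10.0), 2), "m")  # 1.31 m
```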
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
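The model structure (shared f0, per-channel amplitudes and phases) leads, for white noise, to a likelihood maximised by summing per-channel harmonic projection energies over a grid of candidate pitches. A pure-Python sketch of that idea, not the paper's exact estimator:

```python
import math

def multichannel_pitch(channels, fs, f_grid, n_harm=3):
    """Grid-search pitch estimate: for each candidate f0, sum over channels
    and harmonics the energy of the signal projected on cos/sin at l*f0.
    Each channel may have its own amplitudes and phases."""
    def score(f0):
        s = 0.0
        for x in channels:
            for l in range(1, n_harm + 1):
                c = sum(xi * math.cos(2 * math.pi * l * f0 * i / fs)
                        for i, xi in enumerate(x))
                d = sum(xi * math.sin(2 * math.pi * l * f0 * i / fs)
                        for i, xi in enumerate(x))
                s += c * c + d * d
        return s
    return max(f_grid, key=score)

fs, n, f0 = 1000.0, 200, 110.0
# Two channels: same pitch, different amplitudes and phases.
ch1 = [math.sin(2 * math.pi * f0 * i / fs) for i in range(n)]
ch2 = [0.4 * math.sin(2 * math.pi * f0 * i / fs + 1.0) for i in range(n)]
print(multichannel_pitch([ch1, ch2], fs, range(80, 151)))  # 110
```

Summing energies across channels is what lets a weak, noisy channel still contribute without a shared array geometry being assumed.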
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
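The fixed-point idea can be sketched for a single line amplitude: for model counts m_i = A·g_i + b_i, the Poisson ML condition ∂lnL/∂A = 0 gives Σ g_i = Σ n_i g_i/(A g_i + b_i), which suggests the multiplicative iteration below (a generic sketch, not CORA's exact equations):

```python
import math

def fit_line_amplitude(counts, profile, background, n_iter=200):
    """Poisson maximum-likelihood amplitude of a known line profile on a
    known background, via the fixed-point iteration
    A <- A * sum(n_i g_i / (A g_i + b_i)) / sum(g_i)."""
    a = 1.0
    g_sum = sum(profile)
    for _ in range(n_iter):
        a *= sum(n * g / (a * g + b)
                 for n, g, b in zip(counts, profile, background)) / g_sum
    return a

# Gaussian line profile on a flat background; use noiseless expected counts
# so the ML estimate should recover the true amplitude.
centers = range(-10, 11)
profile = [math.exp(-0.5 * (x / 2.0) ** 2) for x in centers]
background = [3.0] * len(profile)
true_a = 25.0
counts = [true_a * g + b for g, b in zip(profile, background)]
print(round(fit_line_amplitude(counts, profile, background), 3))  # 25.0
```

Because the update is multiplicative and the Poisson log-likelihood is concave in A, the iteration converges monotonically from any positive starting value, which is what makes the fixed-point form efficient for low-count spectra.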
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit, a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
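Zipf's law corresponds to a power-law tail with exponent μ = 1 for the firm-size distribution. A quick numerical check using the maximum-likelihood (Hill) tail-exponent estimator on synthetic Zipf-distributed sizes (illustrative only, not the paper's growth model):

```python
import math
import random

def hill_estimator(sizes, x_min=1.0):
    """ML (Hill) estimate of the exponent mu for P(S > s) ~ s^-mu."""
    logs = [math.log(s / x_min) for s in sizes if s >= x_min]
    return len(logs) / sum(logs)

# Synthetic Zipf firm sizes: S = x_min / (1 - U) gives P(S > s) = 1/s.
rng = random.Random(42)
sizes = [1.0 / (1.0 - rng.random()) for _ in range(20000)]
print(round(hill_estimator(sizes), 2))  # close to 1.0
```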
Boundary condition effects on maximum groundwater withdrawal in coastal aquifers.
Lu, Chunhui; Chen, Yiming; Luo, Jian
2012-01-01
Prevention of sea water intrusion in coastal aquifers subject to groundwater withdrawal requires optimization of well pumping rates to maximize the water supply while avoiding sea water intrusion. Boundary conditions and the aquifer domain size have significant influences on simulating flow and concentration fields and estimating maximum pumping rates. In this study, an analytical solution is derived based on the potential-flow theory for evaluating maximum groundwater pumping rates in a domain with a constant hydraulic head landward boundary. An empirical correction factor, which was introduced by Pool and Carrera (2011) to account for mixing in the case with a constant recharge rate boundary condition, is found also applicable for the case with a constant hydraulic head boundary condition, and therefore greatly improves the usefulness of the sharp-interface analytical solution. Comparing with the solution for a constant recharge rate boundary, we find that a constant hydraulic head boundary often yields larger estimations of the maximum pumping rate and when the domain size is five times greater than the distance between the well and the coastline, the effect of setting different landward boundary conditions becomes insignificant with a relative difference between two solutions less than 2.5%. These findings can serve as a preliminary guidance for conducting numerical simulations and designing tank-scale laboratory experiments for studying groundwater withdrawal problems in coastal aquifers with minimized boundary condition effects.
Maximum Work of Free-Piston Stirling Engine Generators
Kojima, Shinji
2017-04-01
Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.
Ceramic membranes for high temperature hydrogen separation
Adcock, K.D.; Fain, D.E.; James, D.L.; Powell, L.E.; Raj, T.; Roettger, G.E.; Sutton, T.G. [East Tennessee Technology Park, Oak Ridge, TN (United States)]
1997-12-01
The separative performance of the authors' ceramic membranes has been determined in the past using a permeance test system that measured flows of pure gases through a membrane at temperatures up to 275 C. From these data, the separation factor was determined for a particular gas pair from the ratio of the pure gas specific flows. An important project goal this year has been to build a Mixed Gas Separation System (MGSS) for measuring the separation efficiencies of membranes at higher temperatures and using mixed gases. The MGSS test system has been built, and initial operation has been achieved. The MGSS is capable of measuring the separation efficiency of membranes at temperatures up to 600 C and pressures up to 100 psi using a binary gas mixture such as hydrogen/methane. The mixed gas is fed into a tubular membrane at pressures up to 100 psi, and the membrane separates the feed gas mixture into a permeate stream and a raffinate stream. The test membrane is sealed in a stainless steel holder that is mounted in a split tube furnace to permit membrane separations to be evaluated at temperatures up to 600 C. The compositions of the three gas streams are measured by a gas chromatograph equipped with thermal conductivity detectors. The test system also measures the temperatures and pressures of all three gas streams as well as the flow rate of the feed stream. These data taken over a range of flows and pressures permit the separation efficiency to be determined as a function of the operating conditions. A mathematical model of the separation has been developed that permits the data to be reduced and the separation factor for the membrane to be determined.
Nobuyuki Kenmochi
1996-01-01
w is constrained by double obstacles σ_* ≤ w ≤ σ^* (i.e., σ_* and σ^* are the threshold values of w). The objective of this paper is to discuss the semigroup {S(t)} associated with the phase separation model, and to construct its global attractor.
Phase separation micro molding
Vogelaar, Laura
2005-01-01
The research described in this thesis concerns the development of a new microfabrication method, Phase Separation Micro Molding (PSμM). While microfabrication is still best known from the semiconductor industry, where it is used to integrate electrical components on a chip, the scope has immensely expanded.
Fathering After Marital Separation
Keshet, Harry Finkelstein; Rosenthal, Kristine M.
1978-01-01
Deals with experiences of a group of separated or divorced fathers who chose to remain fully involved in the upbringing of their children. As they underwent transition from married parenthood to single fatherhood, these men learned that meeting demands of child care contributed to personal stability and growth. (Author)
Fritz, P. [UFZ-Umweltforschungszentrum, Centre of Environmental Research Leipzig-Halle, Leipzig (Germany)
2000-07-01
Storm-runoff thus reflects the complex hydraulic behaviour of drainage basins and water-links of such systems. Water of different origin may participate in the events and in this lecture, the application of isotope techniques to separate storm hydrographs into different components will be presented.
Dabelsteen, Hans B.
This PhD thesis asks how we can conceptualize the current separation doctrine of religion and politics in a country like Denmark, where the structure of the established church and peoplehood overlap. In order to answer this question, Hans Bruun Dabelsteen maps the current discussion of secularism...
Acromioclavicular Joint Separations
2013-01-01
Published online: 16 December 2012 © Springer Science+Business Media New York 2012. Abstract: Acromioclavicular (AC) joint separations are common ... injuries. The sports most likely to cause AC joint dislocations are football, soccer, hockey, rugby, and skiing, among others [9, 28, 29]. The major cause
1992-03-04
SEPARATEES: Defense Outplacement Referral System (DORS). Since most of us are not independently wealthy, we will need a job after separation. DORS is ... Job Assistance. SPOUSES OF ALL SEPARATEES: As a spouse you may take advantage of the outplacement services, such as help preparing Standard Form 171's and resumes.
Bill of Rights in Action, 1987
1987-01-01
The dimensions of the separation of powers principle are explored through three lessons in the subject areas of U.S. history, U.S. government, and world history. In 1748, a French nobleman, Baron de Montesquieu, wrote a book called "The Spirit of the Laws," in which he argued that there could be no liberty when all government power was…
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
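A minimal sketch of the PCA step described above, assuming a toy ensemble and an ordinary sample correlation matrix in place of the paper's maximum likelihood estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy ensemble: 50 "structures", each a vector of 10 atomic coordinates.
ensemble = rng.normal(size=(50, 10))
ensemble[:, 1] += ensemble[:, 0]          # induce a correlated pair

# Sample correlation matrix (the paper instead uses a maximum
# likelihood estimator that accounts for superposition uncertainty).
corr = np.corrcoef(ensemble, rowvar=False)

# Principal components: eigenvectors of the correlation matrix,
# ordered by decreasing eigenvalue.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("dominant mode explains", eigvals[0] / eigvals.sum())
```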
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T^-1(I - A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness defined here and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ ν, π+ → μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling
2013-09-01
Full Text Available In this paper, using the maximum principle to analyze dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for a high-tech perishable products supply chain and obtain the optimal conditions and results. On this basis, we further study the effect of the location of the CODP on the total cost, and the relation between the CODP, inventory policy and demand type, through data simulation. The simulation results show that the CODP is located downstream in the product life cycle and is a linear function of the product life cycle. The results indicate that demand forecasting is the main factor influencing the total cost; meanwhile, the mode of production according to the demand forecast is the deciding factor of the total cost. The model can also reflect the relation between the total cost of the two-stage supply chain, inventory, and demand.
Maximum solid solubility of transition metals in vanadium solvent
ZHANG Jin-long; FANG Shou-shi; ZHOU Zi-qiang; LIN Gen-wen; GE Jian-sheng; FENG Feng
2005-01-01
Maximum solid solubility (Cmax) of different transition metals in a metal solvent can be described by a semi-empirical equation using a function Zf that contains the electronegativity difference, atomic diameter and electron concentration. The relation between Cmax and these parameters for transition metals in a vanadium solvent was studied. It is shown that the relation between Cmax and the function Zf can be expressed as ln Cmax = Zf = 7.3165 - 2.7805(ΔX)^2 - 71.278δ^2 - 0.85556n^(2/3). The atomic size parameter has the largest effect on the Cmax of the V binary alloy, followed by the electronegativity difference; the electron concentration has the smallest effect among the three bond parameters. The function Zf is used for predicting the unknown Cmax of transition metals in vanadium solvent. The results are compared with the Darken-Gurry theorem, which can be deduced from the function Zf obtained in this work.
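The fitted function Zf quoted in the abstract can be evaluated directly; the parameter values in the example below are made up for illustration and are not data from the paper:

```python
import math

def max_solid_solubility(delta_x: float, delta: float, n: float) -> float:
    """Semi-empirical maximum solid solubility (at.%) of a transition
    metal in vanadium, from the fitted function Zf in the abstract.
    delta_x: electronegativity difference, delta: atomic size parameter,
    n: electron concentration."""
    zf = 7.3165 - 2.7805 * delta_x**2 - 71.278 * delta**2 - 0.85556 * n**(2 / 3)
    return math.exp(zf)  # since ln Cmax = Zf

# Illustrative (hypothetical) parameter values:
print(max_solid_solubility(delta_x=0.1, delta=0.05, n=5.0))
```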
Lane, S.M.
1979-08-01
An experimental investigation of the level structure of 133Te was performed by spectroscopy of gamma-rays following the beta-decay of 2.7 min 133Sb. Multiscaled gamma-ray singles spectra and 2.5×10^7 gamma-gamma coincidence events were used in the assignment of 105 of the approximately 400 observed gamma-rays to 133Sb decay and in the construction of the 133Te level scheme with 29 excited levels. One hundred twenty-two gamma-rays were identified as originating in the decay of other isotopes of Sb or their daughter products. The remaining gamma-rays were associated with the decay of impurity atoms or have as yet not been identified. A new computer program based on the Lanczos tridiagonalization algorithm using an uncoupled m-scheme basis and vector manipulations was written. It was used to calculate energy levels, parities, spins, model wavefunctions, neutron and proton separation energies, and some electromagnetic transition probabilities for the following nuclei in the 132Sn region: 128Sn, 129Sn, 130Sn, 131Sn, 130Sb, 131Sb, 132Sb, 133Sb, 132Te, 133Te, 134Te, 134I, 135I, 135Xe, and 136Xe. The results are compared with experiment and the agreement is generally good. For non-magic nuclei, the 1g7/2, 2d5/2, 2d3/2, 1h11/2, and 3s1/2 orbitals are available to valence protons and the 2d5/2, 2d3/2, 1h11/2, and 3s1/2 orbitals are available to valence neutron holes. The present CDC7600 computer code can accommodate 59 single particle states and vectors comprised of 30,000 Slater determinants. The effective interaction used was that of Petrovich, McManus, and Madsen, a modification of the Kallio-Kolltveit realistic force. Single particle energies, effective charges and effective g-factors were determined from experimental data for nuclei in the 132Sn region. 116 references.
Isotopic separation by ion chromatography; La separation isotopique par chromatographie ionique
Albert, M.G.; Barre, Y.; Neige, R. [CEA Centre d'Etudes de la Vallee du Rhone, 26 - Pierrelatte (France). Dept. de Technologie de l'Enrichissement
1994-12-31
The isotopic exchange reaction and the isotopic separation factor are first recalled; the principles of ion chromatography applied to lithium isotope separation are then reviewed (displacement chromatography), and the process is modelled with a view to dimensioning and optimizing the industrial process; the various dimensioning parameters are the isotopic separation factor, the isotopic exchange kinetics and the material flow rate. Effects of the resin type and structure are presented. Dimensioning is also affected by physico-chemical and hydraulic parameters. Industrial implementation features are also discussed. 1 fig., 1 tab., 5 refs.
Radiation engineering of optical antennas for maximum field enhancement.
Seok, Tae Joon; Jamshidi, Arash; Kim, Myungki; Dhuey, Scott; Lakhani, Amit; Choo, Hyuck; Schuck, Peter James; Cabrini, Stefano; Schwartzberg, Adam M; Bokor, Jeffrey; Yablonovitch, Eli; Wu, Ming C
2011-07-13
Optical antennas have generated much interest in recent years due to their ability to focus optical energy beyond the diffraction limit, benefiting a broad range of applications such as sensitive photodetection, magnetic storage, and surface-enhanced Raman spectroscopy. To achieve the maximum field enhancement for an optical antenna, parameters such as the antenna dimensions, loading conditions, and coupling efficiency have been previously studied. Here, we present a framework, based on coupled-mode theory, to achieve maximum field enhancement in optical antennas through optimization of optical antennas' radiation characteristics. We demonstrate that the optimum condition is achieved when the radiation quality factor (Q_rad) of optical antennas is matched to their absorption quality factor (Q_abs). We achieve this condition experimentally by fabricating the optical antennas on a dielectric (SiO2) coated ground plane (metal substrate) and controlling the antenna radiation through optimizing the dielectric thickness. The dielectric thickness at which the matching condition occurs is approximately half of the quarter-wavelength thickness, typically used to achieve constructive interference, and leads to ~20% higher field enhancement relative to a quarter-wavelength thick dielectric layer.
Microgravity Passive Phase Separator
Paragano, Matthew; Indoe, William; Darmetko, Jeffrey
2012-01-01
A new invention disclosure discusses a structure and process for separating gas from liquids in microgravity. The Microgravity Passive Phase Separator consists of two concentric, pleated, woven stainless-steel screens (25-micrometer nominal pore) with an axial inlet, and an annular outlet between both screens (see figure). Water enters at one end of the center screen at high velocity, eventually passing through the inner screen and out through the annular exit. As gas is introduced into the flow stream, the drag force exerted on the bubble pushes it downstream until flow stagnation or until it reaches an equilibrium point between the surface tension holding the bubble to the screen and the drag force. Gas bubbles of a given size will form a front that is moved further down the length of the inner screen with increasing velocity. As more bubbles are added, the front location will remain fixed, but additional bubbles will move to the end of the unit, eventually coming to rest in the large cavity between the unit housing and the outer screen (storage area). Owing to the small size of the pores and the hydrophilic nature of the screen material, gas does not pass through the screen and is retained within the unit for emptying during ground processing. If debris is picked up on the screen, the area closest to the inlet will become clogged, so high-velocity flow will persist farther down the length of the center screen, pushing the bubble front further from the inlet of the inner screen. It is desired to keep the velocity high enough so that, for any bubble size, an area of clean screen exists between the bubbles and the debris. The primary benefits of this innovation are the lack of any need for additional power, strip gas, or location for venting the separated gas. As the unit contains no membrane, the transport fluid will not be lost due to evaporation in the process of gas separation. Separation is performed with relatively low pressure drop based on the large surface
Flow separation on wind turbine blades
Corten, G. P.
2001-01-01
the angle of attack. The art of designing stall rotors is to make the separated area on the blades extend in such a way that the extracted power remains precisely constant, independent of the wind speed, while the power in the wind at cut-out exceeds the maximum power of the turbine by a factor of 8. Since the stall behaviour is influenced by many parameters, this demand cannot be easily met. However, if it can be met, the advantage of stall control is its passive operation, which is reliable and cheap. Problem Definition: In practical application, stall control is not very accurate and many stall-controlled turbines do not meet their specifications. Deviations from the design power in the order of tens of percent are regular. In the nineties, the aerodynamic research on these deviations focussed on: profile aerodynamics, computational fluid dynamics, rotational effects on separation and pressure measurements on test turbines. However, this did not adequately solve the actual problems with stall turbines. In this thesis, we therefore formulated the following as the essential question: "Does the separated blade area really extend with the wind speed, as we predict?" To find the answer a measurement technique was required, which 1) was applicable on large commercial wind turbines, 2) could follow the dynamic changes of the stall pattern, 3) was not influenced by the centrifugal force and 4) did not disturb the flow. Such a technique was not available, therefore we decided to develop it. Stall Flag Method: For this method, a few hundred indicators are fixed to the rotor blades in a special pattern. These indicators, called "stall flags", are patented by the Netherlands Energy Research Foundation (ECN). They have a retro-reflective area which, depending on the flow direction, is or is not covered. A powerful light source in the field up to 500 m behind the turbine illuminates the swept rotor area.
The uncovered reflectors reflect the light to the source, where a digital video
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
Full Text Available The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
Full Text Available An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
Full Text Available The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
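Both the cubic-time specification and the linear-time solution mentioned above can be sketched directly; the linear version is the classic Kadane scan (the abstract's own development is datatype-generic and monadic rather than imperative), and the example list is illustrative:

```python
def mss_spec(xs):
    """Cubic-time specification: the maximum over all contiguous
    segments of the list (the empty segment contributes 0)."""
    n = len(xs)
    return max([0] + [sum(xs[i:j]) for i in range(n) for j in range(i, n + 1)])

def mss_linear(xs):
    """Linear-time solution (Kadane's algorithm): scan left to right,
    keeping the best sum of a segment ending at the current position."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)
        best = max(best, ending_here)
    return best

xs = [31, -41, 59, 26, -53, 58, 97, -93, -23, 84]
print(mss_linear(xs))  # 187, and mss_spec(xs) agrees
```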
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Full Text Available Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ - cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ_0, μ_1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ_0, ℓ_1)}. The optimal control switches between μ_0 and μ_1 across the curve X_t = g_*(S_t), where s ↦ g_*(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250-370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
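The spectrally-imposed limit can be illustrated by evaluating K = 683 lm/W × ∫V(λ)S(λ)dλ / ∫S(λ)dλ for a flat spectrum confined to a 400-700 nm bandpass. The Gaussian form of V(λ) below is a common rough approximation, not the CIE table the paper would use, so the result only loosely brackets the paper's 250-370 lm/W range:

```python
import math

def v_lambda(lam_nm: float) -> float:
    """Gaussian approximation to the photopic luminosity function V(λ),
    peaking near 555 nm (an approximation, not the CIE tabulation)."""
    lam_um = lam_nm / 1000.0
    return 1.019 * math.exp(-285.4 * (lam_um - 0.559) ** 2)

def luminous_efficacy(spectrum, lo=400.0, hi=700.0, steps=3000):
    """Spectral luminous efficacy in lm/W: 683 * ∫V(λ)S(λ)dλ / ∫S(λ)dλ,
    by midpoint-rule integration over the chosen bandpass."""
    d = (hi - lo) / steps
    num = den = 0.0
    for i in range(steps):
        lam = lo + (i + 0.5) * d
        s = spectrum(lam)
        num += v_lambda(lam) * s * d
        den += s * d
    return 683.0 * num / den

# Flat ("equal energy") spectrum truncated to the 400-700 nm band,
# giving roughly 240-245 lm/W under this approximation of V(λ):
print(luminous_efficacy(lambda lam: 1.0))
```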
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
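A pairwise maximum entropy model of the kind described can be sketched for a toy three-economy system; the biases and couplings below are illustrative placeholders, not fitted G7 values:

```python
import itertools
import math

# Pairwise maximum entropy (Ising-like) model over binary states
# s_i ∈ {-1, +1} ("recession"/"expansion") for three economies.
h = [0.1, -0.2, 0.0]                          # individual biases
J = {(0, 1): 0.5, (0, 2): 0.3, (1, 2): 0.2}   # pairwise couplings

def energy(s):
    """Ising-form energy: lower energy means higher probability."""
    e = -sum(h[i] * s[i] for i in range(3))
    e -= sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

states = list(itertools.product([-1, 1], repeat=3))
weights = [math.exp(-energy(s)) for s in states]
Z = sum(weights)                              # partition function
probs = {s: w / Z for s, w in zip(states, weights)}

# Synchronization shows up as excess probability of aligned states:
print(probs[(1, 1, 1)] + probs[(-1, -1, -1)])
```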
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum space quantum gravity geometry with a metric s_μν and a curvature tensor P^λ_μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, whether the objects of interest are moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the training models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmentation precision.
2016-01-01
Footage of the 70 degree ISOLDE GPS separator magnet MAG70 as well as the switchyard for the Central Mass and GLM (GPS Low Mass) and GHM (GPS High Mass) beamlines in the GPS separator zone. In the GPS20 vacuum sector equipment such as the long GPS scanner 482 / 483 unit, faraday cup FC 490, vacuum valves and wiregrid piston WG210 and WG475 and radiation monitors can also be seen. Also the RILIS laser guidance and trajectory can be seen, the GPS main beamgate switch box and the actual GLM, GHM and Central Beamline beamgates in the beamlines as well as the first electrostatic quadrupoles for the GPS lines. Close up of the GHM deflector plates motor and connections and the inspection glass at the GHM side of the switchyard.
2016-01-01
Footage of the 90 and 60 degree ISOLDE HRS separator magnets in the HRS separator zone. In the two vacuum sectors HRS20 and HRS30 equipment such as the HRS slits SL240, the HRS faraday cup FC300 and wiregrid WG210 can be spotted. Vacuum valves, turbo pumps, beamlines, quadrupoles, water and compressed air connections, DC and signal cabling can be seen throughout the video. The HRS main and user beamgate in the beamline between MAG90 and MAG60 and its switchboxes as well as all vacuum bellows and flanges are shown. Instrumentation such as the HRS scanner unit 482 / 483, the HRS WG470 wiregrid and slits piston can be seen. The different quadrupoles and supports are shown as well as the RILIS guidance tubes and installation at the magnets and the different radiation monitors.
Battery separator manufacturing process
Palmer, N.I.; Sugarman, N.
1974-12-27
A battery with a positive plate, a negative plate, and a separator positioned between the plates is described. The separator is made of a polymeric resin that has a degree of undesirable hydrophobicity, is solid below 180 °F, is extrudable as a hot melt, and is resistant to degradation by at least one of acids or alkalies. The separator comprises a nonwoven mat of fibers, the fibers being comprised of the polymeric resin and a wetting agent in an amount of 0.5 to 20 percent by weight based on the weight of the resin. The wetting agent is incompatible with the resin below the melting point of the resin, such that it will bloom over a period of time at ambient temperatures in a battery, yet is compatible with the resin at the extrusion temperature, blooming to the surface of the fibers when the fibers are subjected to heat and pressure.
Dabelsteen, Hans B.
In a study of the historical roots of the separation doctrine and two current policy cases (same-sex marriage and reforms of church governance), the thesis proposes two conceptual expansions. The first is to include modest establishment in a framework of secularism defensible by political liberalism, and the second is to consider secularism in close connection to a theory of peoplehood. Methodologically positioned between interpretive realism and policy analysis, Dabelsteen studies Danish secularism as an ideological concept. He finds that the conceptual structure of Danish secularism holds separation-as-principled distance at its core. Institutionally this particularly pertains to the establishment arrangement, and in practice it translates into the principle of treating everybody equally (with religious freedom, equality and Danish peoplehood as the most important principles adjacent to secularism).
Acoustophoresis separation method
Heyman, Joseph S. (Inventor)
1993-01-01
A method and apparatus are provided for acoustophoresis, i.e., the separation of species via acoustic waves. An ultrasonic transducer applies an acoustic wave to one end of a sample container containing at least two species having different acoustic absorptions. The wave has a frequency tuned to or harmonized with the point of resonance of the species to be separated. This wave causes the species to be driven to an opposite end of the sample container for removal. A second ultrasonic transducer may be provided to apply a second, oppositely directed acoustic wave to prevent undesired streaming. In addition, a radio frequency tuned to the mechanical resonance and coupled with a magnetic field can serve to identify a species in a medium comprising species with similar absorption coefficients, whereby an acoustic wave having a frequency corresponding to this gyrational rate can then be applied to sweep the identified species to one end of the container for removal.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in an investigation of the effects of plier grip spans on total grip force, individual finger forces and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, the 50-mm grip span showed significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas muscle activities were higher at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated: ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50 mm, 65 mm and 80 mm, respectively. Thus, a 50-mm grip span for pliers might be recommended, providing maximum exertion in gripping tasks as well as lower cutting-force-to-maximum-strength ratios in cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Todd, Matthew H
2014-01-01
In one handy volume this handbook summarizes the most common synthetic methods for the separation of racemic mixtures, allowing an easy comparison of the different strategies described in the literature. Alongside classical methods, the authors also consider kinetic resolutions, dynamic kinetic resolutions, divergent reactions of a racemic mixture, and a number of "neglected" cases not covered elsewhere, such as the use of circularly polarized light, polymerizations, "ripening" processes, dynamic combinatorial chemistry, and several thermodynamic processes. The result is a thorough introduction
Separation Logic and Concurrency
Bornat, Richard
Concurrent separation logic is a development of Hoare logic adapted to deal with pointers and concurrency. Since its inception, it has been enhanced with a treatment of permissions to enable sharing of data between threads, and a treatment of variables as resource alongside heap cells as resource. An introduction to the logic is given with several examples of proofs, culminating in a treatment of Simpson's 4-slot algorithm, an instance of racy non-blocking concurrency.
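As a minimal illustration (not taken from the abstract itself), the rule that makes this kind of local, per-thread reasoning sound is the frame rule, which extends any valid local specification with an untouched resource $R$:

```latex
\frac{\{P\}\; C\; \{Q\}}
     {\{P \ast R\}\; C\; \{Q \ast R\}}
\qquad \text{(no variable modified by } C \text{ occurs free in } R\text{)}
```

For example, from $\{x \mapsto -\}\ [x] := 7\ \{x \mapsto 7\}$ one obtains $\{x \mapsto - \ast y \mapsto 3\}\ [x] := 7\ \{x \mapsto 7 \ast y \mapsto 3\}$: a thread owning only cell $x$ cannot invalidate another thread's ownership of $y$, which is what permits sharing heap between threads without global reasoning.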
秦方普; 张爱武; 王书民; 孟宪刚; 胡少兴; 孙卫东
2015-01-01
With the development of remote sensing technology and imaging spectrometers, the resolution of hyperspectral remote sensing images has continually improved; the resulting vast amount of data not only improves the ability of remote sensing detection but also brings great difficulties for analysis and processing. Band selection of hyperspectral imagery can effectively reduce data redundancy and improve classification accuracy and efficiency, so how to select the optimum band combination from hundreds of bands of hyperspectral images is a key issue. In order to solve these problems, we use a spectral clustering algorithm based on graph theory. Firstly, taking the original hyperspectral image bands as the data points to be clustered, the mutual information between every two bands is calculated to generate the similarity matrix. Then, according to graph partition theory, spectral decomposition of the non-normalized Laplacian matrix generated from the similarity matrix is used to obtain clusters for which the between-cluster similarity is small and the within-cluster similarity is large. To achieve dimensionality reduction, the inter-class separability factor of the feature types on each band is calculated and used as the reference index to choose the representative bands within the clusters. Finally, support vector machine (SVM) and minimum distance classification (MDC) methods are employed to classify the hyperspectral image after band selection. The method in this paper differs from traditional unsupervised clustering methods: we employ a spectral clustering algorithm based on graph theory and compute the inter-class separability factor based on a priori knowledge to select bands. Compared with the traditional adaptive band selection algorithm and the band-index-based automatic subspace division algorithm, the two sets of experimental results show that the overall accuracy of SVM is about 94.08% and 94.24% and the overall accuracy of MDC is
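The pipeline described above (mutual-information similarity matrix, unnormalized Laplacian, spectral decomposition into clusters) can be sketched on synthetic data. The band values, histogram bin count and two-cluster spectral bisection below are illustrative assumptions, not the paper's data or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "hyperspectral bands": two correlated groups of 4 bands each.
# The group structure is what the clustering should recover.
base1, base2 = rng.normal(size=(2, 5000))
bands = np.array([base1 + 0.3 * rng.normal(size=5000) for _ in range(4)] +
                 [base2 + 0.3 * rng.normal(size=5000) for _ in range(4)])

def mutual_info(x, y, bins=32):
    """Histogram estimate of the mutual information between two bands."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

# Pairwise-MI similarity matrix W, as in the paper's first step.
n = len(bands)
W = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        W[i, j] = W[j, i] = mutual_info(bands[i], bands[j])

# Unnormalized graph Laplacian L = D - W and its spectral decomposition.
L = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = np.linalg.eigh(L)

# Spectral bisection: the sign of the Fiedler vector (eigenvector of the
# 2nd-smallest eigenvalue) splits the bands into two clusters. Picking one
# representative band per cluster via the separability factor is not modeled.
labels = (eigvecs[:, 1] > 0).astype(int)
print(labels)  # bands 0-3 should fall in one cluster, bands 4-7 in the other
```
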
Facing the truth about separability. Nothing works without energy
Frondel, Manuel; Schmidt, Christoph M. [Rheinisch-Westfaelisches Institut fuer Wirtschaftsforschung RWI, Hohenzollernstr. 1-3, D-45128 Essen (Germany)
2004-12-01
Separability is a pivotal theoretical and empirical concept in production theory. While the standard definition of separability is primarily motivated by the desire to conceptualize production decisions as a sequential process, the principal purpose of an appropriate concept of separability in empirical work is to justify the omission of variables for which data are either of poor quality or unavailable. This paper demonstrates that this empirical concept needs to be more restrictive than the classical notion of separability is. Therefore, we suggest a novel definition of separability based on cross-price elasticities that has clear empirical content. Because there is ample empirical reason to even doubt the assumption that energy is separable from all other production factors in the relatively mild form of classical separability, energy seems to be an indispensable production factor under separability aspects.
Innovative Separations Technologies
J. Tripp; N. Soelberg; R. Wigeland
2011-05-01
Reprocessing used nuclear fuel (UNF) is a multi-faceted problem involving chemistry, material properties, and engineering. Technology options are available to meet a variety of processing goals. A decision about which reprocessing method is best depends significantly on the process attributes considered to be a priority. New methods of reprocessing that could provide advantages over the aqueous Plutonium Uranium Reduction Extraction (PUREX) and Uranium Extraction + (UREX+) processes, electrochemical, and other approaches are under investigation in the Fuel Cycle Research and Development (FCR&D) Separations Campaign. In an attempt to develop a revolutionary approach to UNF recycle that may have more favorable characteristics than existing technologies, five innovative separations projects have been initiated. These include: (1) Nitrogen Trifluoride for UNF Processing; (2) Reactive Fluoride Gas (SF6) for UNF Processing; (3) Dry Head-end Nitration Processing; (4) Chlorination Processing of UNF; and (5) Enhanced Oxidation/Chlorination Processing of UNF. This report provides a description of the proposed processes, explores how they fit into the Modified Open Cycle (MOC) and Full Recycle (FR) fuel cycles, and identifies performance differences when compared to 'reference' advanced aqueous and fluoride volatility separations cases. To be able to highlight the key changes to the reference case, general background on advanced aqueous solvent extraction, advanced oxidative processes (e.g., volumetric oxidation, or 'voloxidation,' which is high temperature reaction of oxide UNF with oxygen, or modified using other oxidizing and reducing gases), and fluorination and chlorination processes is provided.
Colour Separation and Aversion
Sarah M Haigh
2012-05-01
Aversion to achromatic patterns is well documented but relatively little is known about discomfort from chromatic patterns. Large colour differences are uncommon in the natural environment and deviation from natural statistics makes images uncomfortable (Fernandez and Wilkins 2008, Perception, 37(7), 1098–1113; Juricevic et al 2010, Perception, 39(7), 884–899). We report twelve studies documenting a linear increase in aversion to chromatic square-wave gratings as a function of the separation in UCS chromaticity between the component bars, independent of their luminance contrast. Two possible explanations for the aversion were investigated: (1) accommodative response, or (2) cortical metabolic demand. We found no correlation between chromaticity separation and accommodative lag or variance in lag, measured using an open-field autorefractor. However, near-infrared spectroscopy of the occipital cortex revealed a larger oxyhaemoglobin response to patterns with large chromaticity separation. The aversion may be cortical in origin and does not appear to be due to accommodation.
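The stimulus dimension used here, separation in UCS chromaticity, can be computed from CIE 1931 xy coordinates via the standard CIE 1976 u′v′ transform. The two bar colours below are hypothetical examples, not the study's stimuli:

```python
import math

def xy_to_ucs(x, y):
    """CIE 1931 xy chromaticity -> CIE 1976 UCS (u', v') coordinates."""
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

def chromaticity_separation(c1, c2):
    """Euclidean distance in the u'v' plane between two chromaticities."""
    u1, v1 = xy_to_ucs(*c1)
    u2, v2 = xy_to_ucs(*c2)
    return math.hypot(u1 - u2, v1 - v2)

# Illustrative bar colours for a two-colour grating (reddish vs greenish);
# the study reports aversion rising linearly with this separation.
red_bar = (0.55, 0.35)
green_bar = (0.30, 0.55)
print(round(chromaticity_separation(red_bar, green_bar), 3))
```
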
Rem, P.C.; Bakker, M.C.M.; Berkhout, S.P.M.; Rahman, M.A.
2012-01-01
Eddy current separation apparatus (1) for separating particles (20) from a particle stream (w), wherein the apparatus (1) comprises a separator drum (4) adapted to create a first particle fraction (21) and a second particle fraction (23), a feeding device (2) upstream of the separator drum (4) for s
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
20 CFR (Employees' Benefits), Vol. 1, 2010-04-01: CREDITABLE RAILROAD COMPENSATION, § 211.14 Maximum creditable compensation. ... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
49 CFR (Transportation), Vol. 4, 2010-10-01: Allowable Stress, § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth, or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results, such as: (1) entropy does not necessarily increase in non-isolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of the sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
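MALCOM itself is not specified in this abstract, but the N-gram baseline it is compared against can be sketched: a Laplace-smoothed bigram model scores each procedure history by its length-normalized log-likelihood, and unusually low scores flag anomalies. The procedure codes below are invented for illustration:

```python
import math
from collections import Counter

def train_bigram(sequences, alpha=1.0):
    """Fit a Laplace-smoothed bigram model over categorical event sequences
    and return a scoring function (a stand-in for N-gram analysis, not MALCOM)."""
    pair_counts, ctx_counts, vocab = Counter(), Counter(), set()
    for seq in sequences:
        vocab.update(seq)
        for a, b in zip(seq, seq[1:]):
            pair_counts[a, b] += 1
            ctx_counts[a] += 1
    V = len(vocab)

    def score(seq):
        """Length-normalized log-likelihood; low values flag anomalous histories."""
        lp = sum(math.log((pair_counts[a, b] + alpha) /
                          (ctx_counts[a] + alpha * V))
                 for a, b in zip(seq, seq[1:]))
        return lp / max(len(seq) - 1, 1)

    return score

# Hypothetical "typical" procedure histories vs. an implausible ordering.
typical = [["exam", "xray", "cast"], ["exam", "xray", "cast"],
           ["exam", "lab", "rx"], ["exam", "lab", "rx"]]
score = train_bigram(typical)
print(score(["exam", "xray", "cast"]) > score(["cast", "rx", "xray"]))  # True
```
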
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often removes the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
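The core idea, maximizing a Poisson likelihood for a line-plus-background model rather than least-squares fitting, can be sketched as follows. The wavelength grid, background level and brute-force grid maximization are simplifying assumptions (CORA instead derives a fixed-point equation for the line fluxes), and the numbers are not Capella data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Low-count emission-line spectrum: flat background plus one Gaussian line.
lam = np.linspace(13.3, 13.7, 41)
A_TRUE, MU, SIGMA, BKG = 9.0, 13.5, 0.02, 2.0

def model(A):
    """Expected counts per bin for line amplitude A (background held fixed,
    a simplification of CORA's multi-parameter fit)."""
    return BKG + A * np.exp(-0.5 * ((lam - MU) / SIGMA) ** 2)

counts = rng.poisson(model(A_TRUE))

def loglike(A):
    """Poisson log-likelihood, dropping the A-independent log-factorial term."""
    m = model(A)
    return float(np.sum(counts * np.log(m) - m))

# Brute-force maximization over an amplitude grid.
grid = np.linspace(0.0, 20.0, 2001)
A_ml = grid[np.argmax([loglike(A) for A in grid])]
print(A_ml)  # should land near the true amplitude of 9
```
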
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Maddock, A.G.; Smith, F.
1959-08-25
A method is described for separating plutonium from uranium and fission products by treating a nitrate solution of fission products, uranium, and hexavalent plutonium with a relatively water-insoluble fluoride to adsorb fission products on the fluoride, treating the residual solution with a reducing agent to reduce the plutonium valence to four or less, treating the reduced plutonium solution with a relatively insoluble fluoride to adsorb the plutonium on the fluoride, removing the solution, and subsequently treating the fluoride with its adsorbed plutonium with a concentrated aqueous solution of at least one of aluminum nitrate, ferric nitrate, and manganous nitrate to remove the plutonium from the fluoride.
Karraker, D.G.
1959-07-14
A liquid-liquid extraction process is presented for the recovery of polonium from lead and bismuth. According to the invention an acidic aqueous chloride phase containing the polonium, lead, and bismuth values is contacted with a tributyl phosphate ether phase. The polonium preferentially enters the organic phase which is then separated and washed with an aqueous hydrochloric solution to remove any lead or bismuth which may also have been extracted. The now highly purified polonium in the organic phase may be transferred to an aqueous solution by extraction with aqueous nitric acid.
Beaufait, L.J. Jr.; Stevenson, F.R.; Rollefson, G.K.
1958-11-18
The recovery of plutonium ions from neutron-irradiated uranium can be accomplished by buffering an aqueous solution of the irradiated materials containing tetravalent plutonium to a pH of 4 to 7, adding sufficient acetate to the solution to complex the uranyl present, adding ferric nitrate to form a colloid of ferric hydroxide, plutonium, and associated fission products, removing and dissolving the colloid in aqueous nitric acid, oxidizing the plutonium to the hexavalent state by adding permanganate or dichromate, treating the resultant solution with ferric nitrate to form a colloid of ferric hydroxide and associated fission products, and separating the colloid from the plutonium left in solution.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
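The ML-versus-REML distinction mentioned above shows up already in the single-level case: the ML variance estimator ignores the degree of freedom spent estimating the mean and is biased low, which REML corrects. A minimal numerical sketch (not an HLM fit):

```python
import numpy as np

rng = np.random.default_rng(0)

# 20,000 small samples (n = 5) from a population with true variance 1.
n = 5
samples = rng.normal(size=(20000, n))

# Residual sum of squares about each sample's own mean.
resid = samples - samples.mean(axis=1, keepdims=True)
ss = (resid ** 2).sum(axis=1)

ml_avg = (ss / n).mean()          # ML: divide by n; expectation (n-1)/n = 0.8
reml_avg = (ss / (n - 1)).mean()  # REML: divide by n-1; unbiased, expectation 1.0
print(ml_avg, reml_avg)
```

With small group sizes, as is common in multilevel data, the same downward bias affects ML estimates of variance components, which is why REML is often preferred for them.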
吴晗; 吴晓东; 王庆; 朱明; 方越
2011-01-01
In view of issues of the low efficiency and poor effect in the process of commingled CO2 injection, a CO2 separate injection with concentric dual tubes was proposed. On the basis of the heat transfer principle and fluid flow theory, a mathematical model considering CO2 phase change as flowing along the wellbore of concentric dual tubes and heat transfer was established, with which temperature and pressure distributions of CO2 along the annulus between inner and outer tubes and in the inner tubing string were calculated. Moreover, effects brought about by various factors, such as injection rate, injection temperature, injection pressure, assemblage of inner and outer tubes, interval of injection layers etc. , on the pressure and temperature of CO2 flowing both in the annulus between inner and outer tubes and in the inner tubing string were investigated as well. The results indicate that on condition that wellhead injection parameters of inner and outer tubes are the same, the bigger the diameter of the outer tube, the higher the temperature of the annulus between inner and outer tubes. If the diameter of the outer tube keeps constant, the pressure of the annulus will increase with decreasing the diameter of the inner tube that has a small influence on the temperature of the annulus. When inner and outer tubes have definite diameters, the wellhead injection rate, injection temperature and intervals of injection layers may all have significant effects on temperature and pressure distributions of the annulus and the inner tube, while the wellhead injection pressure affects them a little.
Separating detection and catalog production
Akhlaghi, Mohammad
2016-01-01
In the coming era of massive surveys (e.g. LSST, SKA), the role of the database designers and the algorithms they choose to adopt becomes the decisive factor in scientific progress. Systems that allow and encourage users/scientists to be more creative with the reduction/analysis algorithms can greatly enhance scientific productivity. The separation (modularity) of the detection process and catalog production is one proposal for achieving 'Reduction/analysis algorithms for large databases and vice versa' (a key theme of the 26th ADASS). With the new noise-based detection paradigm, non-parametric detection is now possible for astronomical objects to very low surface brightness limits. In our implementation, one software tool (NoiseChisel) is in charge of detection and another (MakeCatalog) is in charge of catalog production. This modularity has many advantages for pipeline developers and, more importantly, it empowers scientific curiosity and creativity.
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
Printed Spacecraft Separation System
Holmans, Walter [Planetary Systems Corporation, Silver Springs, MD (United States); Dehoff, Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-10-01
In this project Planetary Systems Corporation proposed utilizing additive manufacturing (3D printing) to manufacture a titanium spacecraft separation system for commercial and US government customers, to realize a 90% reduction in cost and energy. These savings were demonstrated via "printing-in" many of the parts and sub-assemblies into one part, thus greatly reducing the labor associated with design, procurement, assembly and calibration of mechanisms. Planetary Systems Corporation redesigned several of the components of the separation system based on additive manufacturing principles, including geometric flexibility and the ability to fabricate complex designs, the ability to combine multiple parts of an assembly into a single component, and the ability to optimize a design for specific mechanical property targets. Shock absorption was specifically targeted and requirements were established to attenuate damage to the Lightband system from the shock of initiation. Planetary Systems Corporation redesigned components based on these requirements and sent the designs to Oak Ridge National Laboratory to be printed. ORNL printed the parts using the Arcam electron beam melting technology, the parts being fabricated from Ti-6Al-4V for the weight and mechanical performance of the material. A second set of components was fabricated from stainless steel on the Renishaw laser powder bed technology, for the improved geometric accuracy, surface finish, and wear resistance of that material. Planetary Systems Corporation evaluated these components and determined that 3D printing is potentially a viable method for achieving significant cost and energy savings.
Virus separation using membranes.
Grein, Tanja A; Michalsky, Ronald; Czermak, Peter
2014-01-01
Industrial manufacturing of cell culture-derived viruses or virus-like particles for gene therapy or vaccine production are complex multistep processes. In addition to the bioreactor, such processes require a multitude of downstream unit operations for product separation, concentration, or purification. Similarly, before a biopharmaceutical product can enter the market, removal or inactivation of potential viral contamination has to be demonstrated. Given the complexity of biological solutions and the high standards on composition and purity of biopharmaceuticals, downstream processing is the bottleneck in many biotechnological production trains. Membrane-based filtration can be an economically attractive and efficient technology for virus separation. Viral clearance, for instance, of up to seven orders of magnitude has been reported for state of the art polymeric membranes under best conditions.This chapter summarizes the fundamentals of virus ultrafiltration, diafiltration, or purification with adsorptive membranes. In lieu of an impractical universally applicable protocol for virus filtration, application of these principles is demonstrated with two examples. The chapter provides detailed methods for production, concentration, purification, and removal of a rod-shaped baculovirus (Autographa californica M nucleopolyhedrovirus, about 40 × 300 nm in size, a potential vector for gene therapy, and an industrially important protein expression system) or a spherical parvovirus (minute virus of mice, 22-26 nm in size, a model virus for virus clearance validation studies).
The maximum sizes of large scale structures in alternative theories of gravity
Bhattacharya, Sourav; Romano, Antonio Enea; Skordis, Constantinos; Tomaras, Theodore N
2016-01-01
The maximum size of a cosmic structure is given by the maximum turnaround radius -- the scale where the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulas for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulas agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the $\Lambda$CDM value, by a factor $1 + \frac{1}{3\omega}$, where $\omega \gg 1$ is the Brans-Dicke parameter, implying consistency of the theory with current data.
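For the $\Lambda$CDM reference value, the maximum turnaround radius of a structure of mass $M$ follows the standard result $R = (3GM/\Lambda c^2)^{1/3}$ (assumed here; the abstract itself only quotes the Brans-Dicke enhancement factor $1 + 1/3\omega$):

```python
# Maximum turnaround radius sketch. The LambdaCDM formula below is the
# standard result the paper compares against; the Brans-Dicke value is taken
# to be larger by the quoted factor 1 + 1/(3*omega).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
LAMBDA = 1.1e-52       # approximate observed cosmological constant, m^-2
M_SUN = 1.989e30       # solar mass, kg
MPC = 3.086e22         # megaparsec, m

def turnaround_radius_mpc(mass_kg, bd_omega=None):
    """Maximum turnaround radius in Mpc; optionally with the BD factor."""
    r = (3.0 * G * mass_kg / (LAMBDA * C * C)) ** (1.0 / 3.0)
    if bd_omega is not None:
        r *= 1.0 + 1.0 / (3.0 * bd_omega)
    return r / MPC

# A rich galaxy cluster (~1e15 solar masses) as an illustrative input.
print(turnaround_radius_mpc(1e15 * M_SUN))        # roughly ten Mpc
print(turnaround_radius_mpc(1e15 * M_SUN, 4e4))   # omega >> 1: tiny increase
```
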
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions have been derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Rooted four-taxa trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Noise and physical limits to maximum resolution of PET images
Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU ' Gregorio Maranon' , E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2007-10-01
In this work we show that there is a limit to the maximum resolution achievable with a high-resolution PET scanner, as well as to the best signal-to-noise ratio. These limits are ultimately related to the physical effects involved in the emission and detection of the radiation, and thus they cannot be overcome by any particular reconstruction method. These effects prevent the spatial high-frequency components of the imaged structures from being recorded by the scanner; therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is outlined as a limiting factor for obtaining high-resolution images in tomographs with small crystal sizes. These results have implications for deciding the optimal number of voxels of the reconstructed image and for designing better PET scanners.
Dependence of maximum concentration from chemical accidents on release duration
Hanna, Steven; Chang, Joseph
2017-01-01
Chemical accidents often involve releases of a total mass, Q, of stored material in a tank over a time duration, td, of less than a few minutes. The value of td is usually uncertain because of lack of knowledge of key information, such as the size and location of the hole and the pressure and temperature of the chemical. In addition, it is rare that eyewitnesses or video cameras are present at the time of the accident. For inhalation hazards, serious health effects (such as damage to the respiratory system) are determined by short-term average concentrations, C. Examples of pressurized liquefied chlorine releases from tanks are given, focusing on scenarios from the Jack Rabbit I (JR I) field experiment. The analytical calculations and the predictions of the SLAB dense gas dispersion model agree that the ratio of maximum C for two different td's is greatest (as much as a factor of ten) near the source. At large distances (beyond a few km for the JR I scenarios), where the cloud travel time tt exceeds both td's, the ratio of maximum C approaches unity.
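The near-source versus far-field behavior described in this abstract can be sketched with a deliberately crude scaling model. This is our own illustration, not the SLAB model: the along-wind dilution timescale tau and its coefficients are assumed stand-ins, chosen only to reproduce the qualitative limits (peak C scales with the release rate Q/td near the source, and the ratio tends to unity once the travel-time spread exceeds both durations):

```python
def max_conc_ratio(x_m: float, td1_s: float, td2_s: float,
                   u: float = 5.0, ti: float = 0.5) -> float:
    """Illustrative (not SLAB): ratio of peak concentration for two release
    durations td1 < td2 of the same total mass Q.

    Near the source, peak C is rate-limited and scales as Q/td, so the
    ratio approaches td2/td1; far downwind, the along-wind spread time
    tau ~ ti * x / u exceeds both durations and the ratio approaches 1.
    u is a nominal wind speed (m/s), ti a crude turbulence-intensity factor.
    """
    tau = ti * x_m / u
    c1 = 1.0 / max(td1_s, tau)   # peak C for duration td1 (arbitrary units, Q = 1)
    c2 = 1.0 / max(td2_s, tau)
    return c1 / c2

print(max_conc_ratio(10.0, 60.0, 600.0))     # near source: ~10 (rate-limited)
print(max_conc_ratio(50000.0, 60.0, 600.0))  # far field: ratio is exactly 1
```

With a 1-minute versus 10-minute release, the toy model reproduces the abstract's factor-of-ten contrast near the source and unity at tens of kilometers.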
Plasma separation: physical separation at the molecular level
Gueroult, Renaud; Rax, Jean-Marcel; Fisch, Nathaniel J.
2016-09-01
Separation techniques are usually divided into two categories depending on the nature of the discriminating property: chemical or physical. Beyond this difference, physical and chemical techniques differ in that chemical separation typically occurs at the molecular level, while physical separation techniques commonly operate at the macroscopic scale. Separation based on physical properties can in principle be realized at the molecular or even atomic scale by ionizing the mixture; this is, in essence, plasma-based separation. Due to this fundamental difference, plasma-based separation stands out from other separation techniques and features unique properties. In particular, plasma separation allows different elements or chemical compounds to be separated based on physical properties. This could prove extremely valuable for separating macroscopically homogeneous mixtures made of substances of similar chemical formulation. Yet, realizing the full potential of plasma separation techniques requires identifying and controlling the basic mechanisms in complex plasmas which exhibit suitable separation properties. In this paper, we examine the potential of plasma separation for various applications, and identify the key physics mechanisms upon which the development of these techniques hinges.
Approximate Maximum Likelihood Commercial Bank Loan Management Model
Godwin N.O. Asemota
2009-01-01
Full Text Available Problem statement: Loan management is a very complex and yet vitally important aspect of any commercial bank's operations. The balance sheet position shows the main sources of funds as deposits and shareholders' contributions. Approach: In order to operate profitably, remain solvent and consequently grow, a commercial bank needs to properly manage its excess cash to yield returns in the form of loans. Results: The above are achieved if the bank can honor depositors' withdrawals at all times and also grant loans to credible borrowers. This is so because loans are the main portfolios of a commercial bank that yield the highest rate of returns. Commercial banks and the environment in which they operate are dynamic, so any attempt to model their behavior without including some elements of uncertainty would be less than desirable. The inclusion of an uncertainty factor is now possible with the advent of stochastic optimal control theories. Thus, an approximate maximum likelihood algorithm with variable forgetting factor was used to model the loan management behavior of a commercial bank in this study. Conclusion: The results showed that the uncertainty factor employed in the stochastic modeling enabled us to adaptively control loan demand as well as fluctuating cash balances in the bank. This loan model can also visually aid commercial bank managers' planning decisions by allowing them to determine excess cash and invest it as loans to earn more assets without jeopardizing public confidence.
Efficiency at maximum power of a discrete feedback ratchet
Jarillo, Javier; Tangarife, Tomás; Cao, Francisco J.
2016-01-01
Efficiency at maximum power is found to be of the same order for a feedback ratchet and for its open-loop counterpart. However, feedback increases the output power by up to a factor of five. This increase in output power is due to the increase in energy input and the effective entropy reduction obtained as a consequence of feedback. Optimal efficiency at maximum power is reached for time intervals between feedback actions two orders of magnitude smaller than the characteristic time of diffusion over a ratchet period length. The efficiency is computed consistently, taking into account the correlation between the control actions. We consider a feedback control protocol for a discrete feedback flashing ratchet, which works against an external load. We maximize the power output by optimizing the parameters of the ratchet, the controller, and the external load. The maximum power output is found to be upper bounded, so the attainable extracted power is limited. Next, we compute an upper bound for the efficiency of this isothermal feedback ratchet at maximum power output. We make this computation by applying recent developments in the thermodynamics of feedback-controlled systems, which give an equation for the entropy reduction due to information. However, this equation requires computing the probability of each possible sequence of the controller's actions. This computation becomes involved when the sequence of the controller's actions is non-Markovian, as is the case in most feedback ratchets. We here introduce an alternative procedure to set tight bounds on the entropy reduction in order to compute its value. In this procedure the bounds are evaluated in a quasi-Markovian limit, which emerges when there are large differences between the stationary probabilities of the system states. These large differences are an effect of the potential strength, which minimizes the departures from Markovianity of the sequence of control actions, allowing also to
Blandino, Rémi; Barbieri, Marco; Etesse, Jean; Grangier, Philippe; Tualle-Brouri, Rosa
2012-01-01
We show that the maximum transmission distance of continuous-variable quantum key distribution in the presence of a Gaussian noisy lossy channel can be arbitrarily increased using a linear noiseless amplifier. We explicitly consider a protocol using amplitude- and phase-modulated coherent states with reverse reconciliation. We find that a noiseless amplifier with amplitude gain g can increase the maximum admissible losses by a factor 1/g^2.
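Reading the abstract's factor as 20·log10(g) additional decibels of tolerable channel loss, the gain in reach is easy to estimate. A back-of-the-envelope sketch (the 0.2 dB/km fiber attenuation is our assumed typical figure, not a number from the paper):

```python
import math

def extra_distance_km(g: float, atten_db_per_km: float = 0.2) -> float:
    """Extra fiber reach from a noiseless amplifier of amplitude gain g,
    taking the abstract's result as 20*log10(g) additional dB of
    tolerable loss. The 0.2 dB/km attenuation is an assumed figure
    typical of standard telecom fiber at 1550 nm."""
    extra_db = 20.0 * math.log10(g)
    return extra_db / atten_db_per_km

print(extra_distance_km(2.0))  # g = 2 -> ~6 dB -> roughly 30 km more fiber
```

Even a modest amplitude gain therefore translates into tens of kilometers of additional range under these assumptions.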
Particle separator scroll vanes
Lastrina, F. A.; Mayer, J. C.; Pommer, L. M.
1985-07-09
An inlet particle separator for a gas turbine engine is provided with unique vanes distributed around the entrance to a particle collection chamber. The vanes are constructed to direct extraneous particles that enter the engine into the collection chamber and to prevent the particles from rebounding back into the engine's air flow stream. The vanes are provided with several features to accomplish this function, including upstream faces that are sharply angled relative to the air flow direction so as to bounce particles towards the collection chamber. In addition, throat regions between the vanes cause localized air flow acceleration and a focusing of the particles that aid in directing them in the proper direction.
Nebulized therapy. SEPAR year.
Olveira, Casilda; Muñoz, Ana; Domenech, Adolfo
2014-12-01
Inhaled drugs are deposited directly in the respiratory tract. They therefore achieve higher concentrations with faster onset of action and fewer side effects than when used systemically. Nebulized drugs are mainly recommended for patients who require high doses of bronchodilators, who need to inhale drugs that only exist in this form (antibiotics or dornase alfa), or who are unable to use other inhalation devices. Technological development in recent years has led to new devices that optimize pulmonary deposition and reduce the time needed for treatment. In this review we focus solely on drugs currently used, or under investigation, for nebulization in adult patients: mainly bronchodilators, inhaled steroids, antibiotics, antifungals, mucolytics and others such as anticoagulants, prostanoids and lidocaine. Copyright © 2014 SEPAR. Published by Elsevier España. All rights reserved.
Block copolymer battery separator
Wong, David; Balsara, Nitash Pervez
2016-04-26
The invention herein described is the use of a block copolymer/homopolymer blend for creating nanoporous materials for transport applications. Specifically, this is demonstrated by using the block copolymer poly(styrene-block-ethylene-block-styrene) (SES) and blending it with homopolymer polystyrene (PS). After blending the polymers, a film is cast, and the film is submerged in tetrahydrofuran, which removes the PS. This creates a nanoporous polymer film, whereby the holes are lined with PS. Control of morphology of the system is achieved by manipulating the amount of PS added and the relative size of the PS added. The porous nature of these films was demonstrated by measuring the ionic conductivity in a traditional battery electrolyte, 1M LiPF.sub.6 in EC/DEC (1:1 v/v) using AC impedance spectroscopy and comparing these results to commercially available battery separators.
Measuring of the maximum measurable velocity for dual-frequency laser interferometer
Zhiping Zhang; Zhaogu Cheng; Zhaoyu Qin; Jianqiang Zhu
2007-01-01
There is an increasing demand on the measurable velocity of laser interferometers in manufacturing technologies. The maximum measurable velocity is limited by the frequency difference of the laser source, the optical configuration, and the electronics bandwidth. An experimental setup based on free-fall motion has been demonstrated to measure the maximum measurable velocity of interferometers. Measurement results show that the maximum measurable velocity is less than its theoretical value. Moreover, the effects of various factors on the measurement results are analyzed, and the results can serve as a reference for industrial applications.
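The theoretical ceiling set by the source's frequency difference follows from the Doppler condition: the measurement beam's Doppler shift may not exceed the heterodyne split frequency. A minimal sketch with assumed example values (633 nm He-Ne wavelength and a 2 MHz split frequency are our illustrative figures, not numbers from the article):

```python
def max_velocity(wavelength_m: float, freq_diff_hz: float) -> float:
    """Theoretical maximum measurable velocity of a dual-frequency
    (heterodyne) interferometer: the Doppler shift 2*v/wavelength may
    not exceed the source frequency difference delta_f, giving
    v_max = wavelength * delta_f / 2."""
    return wavelength_m * freq_diff_hz / 2.0

# Assumed example values: 633 nm He-Ne laser with a 2 MHz split frequency.
print(max_velocity(633e-9, 2e6))  # ~0.63 m/s
```

This is the theoretical bound the article compares against; the measured maximum falls below it because of the optical configuration and electronics bandwidth.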
Beyond maximum entropy: Fractal Pixon-based image reconstruction
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including goodness-of-fit methods such as least-squares fitting and Lucy-Richardson reconstruction, as well as maximum entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior superior to that of standard ME. Our past work has shown how uniform-information-content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than that of the best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
[Anonymous]
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
The Use of Demulsifiers for Separating Water from Anthracene Oil
Zečević, N.
2008-03-01
increasing aromaticity. It is also used for determination of the Bureau of Mines Correlation Index (BMCI), which is obtained either from density and mid-boiling point, or from density and viscosity for those feedstocks which cannot be distilled completely. This index is used by the carbon black industry as an important criterion for feedstock evaluation. The sulphur fraction in feedstocks should not exceed w = 2.5·10⁻², because a higher content greatly affects the quality of carbon black, pollutes the atmosphere, and accelerates corrosion of the facility. The maximum sulphur content in the typical hydrocarbon feedstock is w = 1.2·10⁻². A very important factor of hydrocarbon feedstock is the fraction of alkali metals, especially sodium and potassium. The maximum sodium fraction may be w = 20·10⁻⁶, while the maximum potassium fraction is w = 2·10⁻⁶. The maximum fraction of asphaltenes is w = 15·10⁻². Asphaltenes, determined as pentane-insoluble matter, provide indications concerning the possibility of grit formation. Another very important factor is the temperature range of distillation, which should be low enough, because the hydrocarbon feedstock must vaporize before entering the hot region of the reactor. The viscosity, the pour point, and, for safety reasons, the flash point determine the handling properties and storage conditions of the feedstock. In addition, the water fraction in the hydrocarbon feedstock is one of the most important factors, since it influences the handling properties of the feedstock. The maximum water fraction in hydrocarbon feedstock may be w = 2.0·10⁻², and desirably below w = 1.0·10⁻². A higher water fraction has a considerable financial impact. It is also very difficult to handle such feedstock, especially during unloading and in the production of oil-furnace carbon black. Namely, every water fraction higher than w = 2.0·10⁻² in the hydrocarbon feedstock
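The mass-fraction limits quoted in this abstract can be collected into a simple screening check. This is an illustration of ours (the function, dictionary layout, and sample values are hypothetical; the numeric limits are the ones stated above, with w given as kg/kg):

```python
# Mass-fraction limits for a carbon-black feedstock, as quoted in the abstract.
LIMITS = {
    "sulphur":     2.5e-2,
    "sodium":      20e-6,
    "potassium":   2e-6,
    "asphaltenes": 15e-2,
    "water":       2.0e-2,
}

def check_feedstock(fractions: dict) -> list:
    """Return the names of components whose mass fraction exceeds its limit.
    Components without a listed limit are ignored."""
    return [name for name, w in fractions.items()
            if w > LIMITS.get(name, float("inf"))]

# Hypothetical sample: sulphur and water are within limits, sodium is not.
sample = {"sulphur": 1.2e-2, "sodium": 25e-6, "water": 0.8e-2}
print(check_feedstock(sample))  # ['sodium']
```

Such a check mirrors how the abstract treats each limit independently; a real specification test would also cover distillation range, viscosity, pour point, and flash point.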
Gravity separation for oil wastewater treatment
Golomeova, Mirjana; Zendelska, Afrodita; Krstev, Boris; Krstev, Aleksandar
2010-01-01
In this paper, the applications of gravity separation for oil wastewater treatment are presented. Described is operation on conventional gravity separation and parallel plate separation. Key words: gravity separation, oil, conventional gravity separation, parallel plate separation.
[Effect of trypsin on the rat keratinocyte separation and subculture].
Ouyang, An-Li; Zhou, Yan; Hua, Ping; Tan, Wen-Song
2002-01-01
The effect of trypsin on the separation and subculture of keratinocytes was investigated in this work. It was found that when 0.25% trypsin was applied for 5 minutes to separate keratinocytes, the numbers of active keratinocytes and of cells capable of forming colonies were higher than under the other experimental conditions. The maximum attachment ratio of primary keratinocytes was obtained when skin tissues were treated with trypsin at a concentration of 0.05%. With increasing trypsin concentration, the attachment ratio, attachment rate constant, and colony forming efficiency all increased. Thus, a trypsin concentration of 0.25% is recommended for separating and subculturing keratinocytes.
Cyclonic Separation Technology: Research and Developments
汪华林; 张艳红; 王剑刚; 刘洪来
2012-01-01
Centered on the techniques and industrial applications of the reinforced cyclonic separation process, its principles and mechanism for separation of ions, molecules and their aggregates using polydisperse droplets are discussed generally; the characteristics and influential factors of the fish-hook phenomenon of the grade efficiency curve in cyclonic separation for both gas and liquid are analyzed; and the influence of shear force on particle behavior (or that of a particle swarm) is also summarized. A novel idea for cyclonic separation is presented here: enhancing the cyclonic separation process of ions, molecules and their aggregates with monodisperse microspheres and their surface grafting, rearranging the distribution of particles by size using a centrifugal field, and reinforcing the cyclonic separation performance with an orderly arranged particle swarm. Also elaborated is the investigation of the shortcut flow, the recirculation flow, and the asymmetric structure and non-linear characteristics of the cyclonic flow field with a combined method of Volumetric 3-component Velocimetry (V3V) and Phase-Doppler Particle Anemometry (PDPA). It is recommended to develop new systems for the separation of heterogeneous phases with cyclonic technology, in accordance with the capture and reuse of CO2, the methanol-to-olefins (MTO) process, coal transfer, and the exploitation of oil shale.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Selectivity in capillary electrokinetic separations
de Zeeuw, R.A; de Jong, G.J.; Ensing, K
1999-01-01
This review gives a survey of selectivity modes in capillary electrophoresis separations in pharmaceutical analysis and bioanalysis. Despite the high efficiencies of these separation techniques, good selectivity is required to allow quantitation or identification of a parti
Physical Separation in the Workplace
Stea, Diego; Foss, Nicolai Juul; Holdt Christensen, Peter
2015-01-01
Physical separation is pervasive in organizations, and has powerful effects on employee motivation and organizational behaviors. However, research shows that workplace separation is characterized by a variety of tradeoffs, tensions, and challenges that lead to both positive and negative outcomes...
Determine separations process strategy decision
Slaathaug, E.J.
1996-01-01
This study provides a summary level comparative analysis of selected, top-level, waste treatment strategies. These strategies include No Separations, Separations (high-level/low-level separations), and Deferred Separations of the tank waste. These three strategies encompass the full range of viable processing alternatives based upon full retrieval of the tank wastes. The assumption of full retrieval of the tank wastes is a predecessor decision and will not be revisited in this study.
Composite separators and redox flow batteries based on porous separators
Li, Bin; Wei, Xiaoliang; Luo, Qingtao; Nie, Zimin; Wang, Wei; Sprenkle, Vincent L.
2016-01-12
Composite separators having a porous structure and including acid-stable, hydrophilic, inorganic particles enmeshed in a substantially fully fluorinated polyolefin matrix can be utilized in a number of applications. The inorganic particles can provide hydrophilic characteristics. The pores of the separator result in good selectivity and electrical conductivity. The fluorinated polymeric backbone can result in high chemical stability. Accordingly, one application of the composite separators is in redox flow batteries as low cost membranes. In such applications, the composite separator can also enable additional property-enhancing features compared to ion-exchange membranes. For example, simple capacity control can be achieved through hydraulic pressure by balancing the volumes of electrolyte on each side of the separator. While a porous separator can also allow for volume and pressure regulation, in RFBs that utilize corrosive and/or oxidizing compounds, the composite separators described herein are preferable for their robustness in the presence of such compounds.
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
USER
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polyno
Separating Underdetermined Convolutive Speech Mixtures
Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan
2006-01-01
a method for underdetermined blind source separation of convolutive mixtures. The proposed framework is applicable for separation of instantaneous as well as convolutive speech mixtures. It is possible to iteratively extract each speech signal from the mixture by combining blind source separation...
General Motors sidestream separator
Tessier, R.J.
1981-01-01
On February 15, 1980, the United States Environmental Protection Agency, acting pursuant to Paragraph 113(D)(4) of the Clean Air Act, issued to General Motors an innovative technology order covering fifteen coal-fired spreader-stoker boilers located at six General Motors plants in Ohio. The purpose and effect of this order was to permit General Motors time to develop a new, innovative technique for controlling particulate emissions from the specified boilers before compliance with the federally approved Ohio particulate control regulation was required. This new technology was christened the Sidestream Separator by General Motors. It provides a highly cost-effective means of reducing particulate emissions below levels currently obtainable with conventionally used high-efficiency mechanical collectors. These improvements could prove to be of substantial benefit to many industrial facilities with spreader-stoker coal-fired boilers that cannot be brought into compliance with applicable air pollution regulations except by application of far more expensive and unwieldy electrostatic precipitators (ESPs) or fabric filters (baghouses).
PARAFFIN SEPARATION VACUUM DISTILLATION
Zaid A. Abdulrahman
2013-05-01
Full Text Available Simulated column performance curves were constructed for the existing paraffin separation vacuum distillation column in the LAB plant (Arab Detergent Company, Baiji, Iraq). The variables considered in this study are the thermodynamic model option, top vacuum pressure, top and bottom temperatures, feed temperature, feed composition, and reflux ratio. Simulated column profiles for the temperature and for vapor and liquid flow rate composition were also constructed. Four different thermodynamic model options (SRK, TSRK, PR, and ESSO) were used, affecting the results within 1-25% variation for most cases. The simulated results show that about 2% to 8% of paraffins (C10, C11, C12, and C13) are present in the bottom stream, which may cause a problem in the LAB plant. The major variations with top vacuum pressure were noticed for the top temperature and the paraffin weight fractions in the bottom section. A bottom temperature above 240 °C is not recommended because the total bottom flow rate decreases sharply, whereas the weight fraction of paraffins decreases only slightly. The study demonstrates a successful simulation with CHEMCAD.
An Investigation into Separation of Impurity from Saffron Stigma Using an Electrostatic Separator
H Mortezapour
2015-03-01
Full Text Available In the present study, a laboratory electrostatic separator was constructed and its potential for separating white saffron impurities from stigma was investigated. The device comprised a nylon ribbon which moves in contact with a woolen brush and is charged by the triboelectric effect. The charged ribbon then moves over the material pan. Since electrostatic behavior varies among materials, their attraction to the ribbon differs. The separation tests were conducted at three levels of ribbon position (1.5, 2.5 and 3.5 cm from the material pan), three drum speeds (50, 60 and 70 rpm) and three working times (120, 180 and 240 seconds). The results showed that material absorption increased as working time increased and as the ribbon distance decreased. Meanwhile, raising the speed from 50 to 60 rpm improved material absorption, while a further increase from 60 to 70 rpm reduced it. A maximum impurity separation of 97% was observed with a ribbon distance of 1.5 cm, a ribbon speed of 60 rpm and a working time of 240 seconds. The minimum stigma losses were found to be about 2% when the ribbon distance and speed were 3.5 cm and 70 rpm, respectively, and the separator worked for 120 seconds.
M. Mihelich
2014-11-01
Full Text Available We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10–100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...
The effect of natural selection on the performance of maximum parsimony
Ofria Charles
2007-06-01
Full Text Available Abstract Background Maximum parsimony is one of the most commonly used and extensively studied phylogeny reconstruction methods. While current evaluation methodologies such as computer simulations provide insight into how well maximum parsimony reconstructs phylogenies, they tell us little about how well maximum parsimony performs on taxa drawn from populations of organisms that evolved subject to natural selection in addition to the random factors of drift and mutation. It is clear that natural selection has a significant impact on Among Site Rate Variation (ASRV) and the rate of accepted substitutions; that is, accepted mutations do not occur with uniform probability along the genome and some substitutions are more likely to occur than others. However, little is known about how ASRV and non-uniform character substitutions impact the performance of reconstruction methods such as maximum parsimony. To gain insight into these issues, we study how well maximum parsimony performs with data generated by Avida, a digital life platform where populations of digital organisms evolve subject to natural selective pressures. Results We first identify conditions where natural selection does affect maximum parsimony's reconstruction accuracy. In general, as we increase the probability that a significant adaptation will occur in an intermediate ancestor, the performance of maximum parsimony improves. In fact, maximum parsimony can correctly reconstruct small 4-taxon trees on data that have received surprisingly many mutations if the intermediate ancestor has received a significant adaptation. We demonstrate that this improved performance of maximum parsimony is attributable more to ASRV than to non-uniform character substitutions. Conclusion Maximum parsimony, as well as most other phylogeny reconstruction methods, may perform significantly better on actual biological data than is currently suggested by computer simulation studies because of natural
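The parsimony criterion that the study evaluates can be made concrete with Fitch's small-parsimony algorithm, which counts the minimum number of character changes a rooted binary tree requires for one site. This is a minimal Python sketch; the tree encoding and example data are illustrative, not taken from the Avida experiments:

```python
def fitch_score(tree, leaf_states):
    """Fitch's algorithm: minimum number of character changes on a rooted
    binary tree for one site. `tree` is a nested tuple of leaf names,
    `leaf_states` maps each leaf name to its observed character state."""
    changes = 0

    def post(node):
        nonlocal changes
        if isinstance(node, str):          # leaf: singleton state set
            return {leaf_states[node]}
        left, right = node
        a, b = post(left), post(right)
        if a & b:                          # intersection non-empty: no change
            return a & b
        changes += 1                       # disjoint sets: one substitution
        return a | b

    post(tree)
    return changes

# 4-taxon example: the tree ((A,B),(C,D)) with one site per taxon
tree = (("A", "B"), ("C", "D"))
states = {"A": "G", "B": "G", "C": "T", "D": "T"}
print(fitch_score(tree, states))  # -> 1
```

Maximum parsimony then scores candidate topologies by summing this count over all sites and keeps the tree(s) with the smallest total.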
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on the same set of taxa, the maximum agreement subtree problem (MAST), respectively, the maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics, such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees of Life. We provide two linear time algorithms to check the isomorphism, respectively, compatibility, of a set of trees or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms, whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
Analysis and optimization of an air-launch-to-orbit separation
Sohier, Henri; Piet-Lahanier, Helene; Farges, Jean-Loup
2015-03-01
In an air-launch-to-orbit, a space rocket is launched from a carrier aircraft. Air-launch-to-orbit appears particularly interesting for nano- and microsatellites, which are generally launched as secondary loads, that is, placed in the conventional launch vehicle's payload section with a larger primary satellite. In an air-launch-to-orbit, a small satellite can be launched alone as a primary load, away from a carrier aircraft, aboard a smaller rocket vehicle, and in doing so benefit from more flexible dates and trajectories. One of the most important phases of the mission is the separation between the carrier aircraft and the space rocket. A flight simulator including a large number of factors of uncertainty has been specially developed to study the separation, and a safety criterion has been defined with respect to store collision avoidance. It is used for a sensitivity analysis and an optimization of the possible trajectories. The sensitivity analysis first requires a screening method to select inessential factors that can be held constant. The Morris method is amongst the most popular screening methods. It requires limited calculations, but may result in keeping constant an essential factor, which would greatly affect the results of the sensitivity analysis. This paper shows that this risk can be important in spite of recent improvements of the Morris method. It presents an adaptation of this method which divides this risk by a factor of ten on a standard test function. It is based on the maximum of the elementary effects instead of their average. The method focuses the calculations on the factors with a low impact, checking the convergence of this set of factors, and uses two different factor variations instead of one. This adaptation of the Morris method is used to limit the number of air-launch-to-orbit simulations and simplify the uncertainty domain for analysis by Sobol's method. The aerodynamic perturbations due to wind, the parameters defining the
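The contrast between the classic mean-based Morris statistic and the maximum-based adaptation described above can be sketched as follows. The test function, factor count and one-at-a-time sampling scheme here are illustrative stand-ins, not the authors' simulator or their exact sampling design:

```python
import random

def elementary_effects(f, k, r=50, delta=0.1, seed=0):
    """One-at-a-time Morris screening on [0,1]^k: returns, per factor,
    the mean of |EE| (classic ranking) and the max of |EE| (the
    adaptation that guards against locally influential factors)."""
    rng = random.Random(seed)
    mean_abs = [0.0] * k
    max_abs = [0.0] * k
    for _ in range(r):
        x = [rng.random() * (1 - delta) for _ in range(k)]
        base = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta                  # elementary step for factor i
            ee = abs(f(xp) - base) / delta  # elementary effect
            mean_abs[i] += ee / r
            max_abs[i] = max(max_abs[i], ee)
    return mean_abs, max_abs

# Factor 0 acts everywhere, factor 1 only on a small region, factor 2 never;
# the mean can understate factor 1, while the max captures its local impact
f = lambda x: 10 * x[0] + (50.0 if x[1] > 0.9 else 0.0) + 0.0 * x[2]
mean_abs, max_abs = elementary_effects(f, 3)
```

A factor like `x[1]` is exactly the kind the mean-based ranking risks freezing: most of its elementary effects are zero, so only the maximum reliably flags it as essential.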
Separation of non-ferrous metals from ASR by corona electrostatic separation
Kim, Yang-soo; Choi, Jin-Young; Jeon, Ho-Seok; Han, Oh-Hyung; Park, Chul-Hyun
2016-04-01
Automotive shredder residue (ASR), the residual fraction of approximately 25% obtained after dismantling and shredding end-of-life cars, consists of polymers (plastics and rubber), metals (ferrous and non-ferrous), wood, glass and fluff (textile and fiber). ASR cannot be effectively separated because of its heterogeneous materials and coated or laminated complexes, and is therefore largely deposited in landfill sites as waste. Thus, to reduce pollutant release before disposal, techniques that can improve the liberation of coated (or laminated) complexes and the recovery of valuable metals from the shredder residue are needed. ASR may be separated by a series of physical processing operations such as comminution and air, magnetic and electrostatic separations. This work deals with the characterization of the shredder residue coming from an industrial plant in Korea and focuses on estimating the optimal conditions of corona electrostatic separation for improving the separation efficiency of valuable non-ferrous metals such as aluminum and copper. From the test results, the maximum separation achievable for non-ferrous metals using corona electrostatic separation has been shown to be a recovery of 92.5% at a grade of 75.8%. The recommended values of the process variables particle size, electrode potential, drum speed, splitter position and relative humidity are -6 mm, 50 kV, 35 rpm, 20° and less than 40%, respectively. Acknowledgments: This study was supported by the R&D Center for Valuable Recycling (Global-Top R&BD Program) of the Ministry of Environment (Project No. GT-11-C-01-170-0).
Understanding Peripheral Bat Populations Using Maximum-Entropy Suitability Modeling
Barnhart, Paul R.; Gillam, Erin H.
2016-01-01
Individuals along the periphery of a species' distribution regularly encounter more challenging environmental and climatic conditions than conspecifics near the center of the distribution. Due to these potential constraints, individuals in peripheral margins are expected to change their habitat and behavioral characteristics. Managers typically rely on species distribution maps when developing adequate management practices. However, these range maps are often too simplistic and do not provide adequate information as to what fine-scale biotic and abiotic factors are driving a species' occurrence. In the last decade, habitat suitability modelling has become widely used as a substitute for simplistic distribution mapping, allowing regional managers to fine-tune management resources. The objectives of this study were to use maximum-entropy modeling to produce habitat suitability models for seven species that have a peripheral margin intersecting the state of North Dakota, according to current IUCN distributions, and to determine the vegetative and climatic characteristics driving these models. Mist-netting resulted in the documentation of five species outside the IUCN distribution in North Dakota, indicating that current range maps for North Dakota, and potentially the northern Great Plains, are in need of updating. Maximum-entropy modeling showed that temperature, not precipitation, was the most important variable for model production. This fine-scale result highlights the importance of habitat suitability modelling, as this information cannot be extracted from distribution maps. Our results provide baseline information needed for future research about how and why individuals residing in the peripheral margins of a species' distribution may show marked differences in habitat use as a result of urban expansion, habitat loss, and climate change compared to more centralized populations. PMID:27935936
Approximating the maximum weight clique using replicator dynamics.
Bomze, I R; Pelillo, M; Stix, V
2000-01-01
Given an undirected graph with weights on the vertices, the maximum weight clique problem (MWCP) is to find a subset of mutually adjacent vertices (i.e., a clique) having the largest total weight. This is a generalization of the classical problem of finding the maximum cardinality clique of an unweighted graph, which arises as a special case of the MWCP when all the weights associated to the vertices are equal. The problem is known to be NP-hard for arbitrary graphs and, according to recent theoretical results, so is the problem of approximating it within a constant factor. Although there has recently been much interest in neural-network algorithms for the unweighted maximum clique problem, no effort has been directed so far toward its weighted counterpart. In this paper, we present a parallel, distributed heuristic for approximating the MWCP based on dynamics principles developed and studied in various branches of mathematical biology. The proposed framework centers around a recently introduced continuous characterization of the MWCP which generalizes an earlier remarkable result by Motzkin and Straus. This allows us to formulate the MWCP (a purely combinatorial problem) in terms of a continuous quadratic programming problem. One drawback associated with this formulation, however, is the presence of "spurious" solutions, and we present characterizations of these solutions. To avoid them we introduce a new regularized continuous formulation of the MWCP inspired by previous works on the unweighted problem, and show how this approach completely solves the problem. The continuous formulation of the MWCP naturally maps onto a parallel, distributed computational network whose dynamical behavior is governed by the so-called replicator equations. These are dynamical systems introduced in evolutionary game theory and population genetics to model evolutionary processes on a macroscopic scale. We present theoretical results which guarantee that the solutions provided by
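For the unweighted special case, the replicator machinery the paper builds on can be sketched in a few lines via the Motzkin-Straus formulation: maximize x^T A x over the probability simplex, where A is the adjacency matrix. The graph, iteration count and tolerance below are illustrative; the paper's regularized weighted formulation is not reproduced here:

```python
import numpy as np

def replicator_clique(A, steps=2000):
    """Discrete-time replicator dynamics on the Motzkin-Straus program
    max x^T A x over the simplex. For an unweighted graph with 0/1
    adjacency matrix A, attracting fixed points correspond to cliques."""
    n = len(A)
    x = np.full(n, 1.0 / n)          # start at the simplex barycenter
    for _ in range(steps):
        Ax = A @ x
        x = x * Ax / (x @ Ax)        # replicator update; sum(x) stays 1
    return x

# Graph on 5 vertices whose unique maximum clique is the triangle {0,1,2};
# vertices 3 and 4 are pendant-like and should be driven to weight 0
A = np.array([[0, 1, 1, 1, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 1],
              [1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)
x = replicator_clique(A)
clique = {i for i in range(5) if x[i] > 1e-3}
print(clique)  # -> {0, 1, 2}
```

The update multiplies each component by its "fitness" (Ax)_i relative to the population average x^T A x, so support concentrates on the clique's characteristic vector; this is the mechanism the paper regularizes and generalizes to vertex weights.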
Estimates of chemical compaction and maximum burial depth from bedding parallel stylolites
Gasparrini, Marta; Beaudoin, Nicolas; Lacombe, Olivier; David, Marie-Eleonore; Youssef, Souhail; Koehn, Daniel
2017-04-01
Chemical compaction is a diagenetic process affecting sedimentary series during burial that develops rough dissolution surfaces named Bedding Parallel Stylolites (BPS). BPS are related to the dissolution of important rock volumes and can lead to porosity reduction around them due to post-dissolution cementation. Our understanding of the effect of chemical compaction on rock volume and porosity evolution during basin burial is, however, still too limited for it to be fully taken into account in basin models and thermal or fluid-flow simulations. This contribution presents a novel and multidisciplinary approach to quantify chemical compaction and to estimate the maximum paleodepth of burial, applied to the Dogger carbonate reservoirs of the Paris Basin sub-surface. This succession experienced a relatively simple burial history (nearly continuous burial from Upper Jurassic to Upper Cretaceous, followed by a main uplift phase) and mainly underwent normal overburden (inducing development of BPS), escaping major tectonic stress episodes. We considered one core from the depocentre and one from the eastern margin of the basin in the same stratigraphic interval (Bathonian Sup. - Callovian Inf.; restricted lagoonal setting), and analysed the macro- and micro-facies to distinguish five main depositional environments. The type and abundance of BPS were continuously recorded along the logs and treated statistically to obtain preliminary rules relating the occurrence of BPS to the contrasting facies and burial histories. The treatment of high-resolution 2D images allowed the identification and separation of the BPS, to evaluate total stylolitization density and insoluble thickness as an indirect measure of the dissolved volume, with respect to the morphology of the BPS considered. Based on the morphology of the BPS roughness, we used a roughness signal analysis method to reconstruct the vertical paleo-stress (paleo-depth) recorded by the BPS during chemical compaction. The
The problem of the maximum volumes and particle horizon in the Friedmann universe model
Gong, S. M.
1989-08-01
The maximum volume of the closed Friedmann universe is further investigated and is shown to be 2π²R³(t), instead of π²R³(t) as found previously. This discrepancy comes from the incomplete use of the volume formula for 3-dimensional spherical space in the astronomical literature. Mathematically, the maximum volume exists at any cosmic time t in the 3-dimensional spherical case. However, the closed Friedmann universe in expansion reaches its maximum volume only at the time of the maximum scale factor. The particle horizon imposes no limitation on the farthest objects in the closed Friedmann universe if the proper distance of objects is compared with the particle horizon, as it should be. This leads to absurdity if the luminosity distance of objects is compared with the proper distance of the particle horizon.
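The corrected value follows from the full volume integral over the 3-sphere of radius R(t) in standard hyperspherical coordinates, with the radial angle χ running over its full range 0 to π:

```latex
V(t) = \int_0^{2\pi}\!\int_0^{\pi}\!\int_0^{\pi}
       R^3(t)\,\sin^2\!\chi\,\sin\theta \;d\chi\,d\theta\,d\varphi
     = R^3(t)\cdot\frac{\pi}{2}\cdot 2\cdot 2\pi
     = 2\pi^2 R^3(t).
```

One way to arrive at the smaller value π²R³(t) that the abstract flags as incomplete is to stop the χ integration at π/2, which covers only half of the 3-sphere.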
Present and Last Glacial Maximum climates as states of maximum entropy production
Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere
2011-01-01
The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...
Separators - Technology review: Ceramic based separators for secondary batteries
Nestler, Tina; Schmid, Robert; Münchgesang, Wolfram; Bazhenov, Vasilii; Schilm, Jochen; Leisegang, Tilmann; Meyer, Dirk C.
2014-06-01
Besides a continuous increase of the worldwide use of electricity, the electric energy storage technology market is a growing sector. At the latest since the German energy transition ("Energiewende") was announced, technological solutions for the storage of renewable energy have been intensively studied. Storage technologies in various forms are commercially available. A widespread technology is the electrochemical cell. Here the cost per kWh, e.g. determined by energy density, production process and cycle life, is of main interest. Commonly, an electrochemical cell consists of an anode and a cathode that are separated by an ion-permeable or ion-conductive membrane - the separator - as one of the main components. Many applications use polymeric separators whose pores are filled with liquid electrolyte, providing high power densities. However, problems arise from different failure mechanisms during cell operation, which can affect the integrity and functionality of these separators. In the case of excessive heating or mechanical damage, the polymeric separators become an incalculable safety risk. Furthermore, the growth of metallic dendrites between the electrodes leads to unwanted short circuits. In order to minimize these risks, temperature-stable and non-flammable ceramic particles can be added, forming so-called composite separators. Full ceramic separators, in turn, are currently commercially used only for high-temperature operation systems, due to their comparably low ion conductivity at room temperature. However, as safety and lifetime demands increase, these materials are also coming into focus for future room-temperature applications. Hence, growing research effort is being spent on the improvement of the ion conductivity of these ceramic solid electrolyte materials, acting as separator and electrolyte at the same time. Starting with a short overview of available separator technologies and the separator market, this review focuses on ceramic-based separators
Relations between psychological separation and adaptation of adolescents
Vukelić Marija
2006-01-01
Full Text Available The object of this research is the relation between psychological separation-individuation and adaptation to secondary and boarding school, as well as differences in separation and adaptation. Explorative research was performed on a sample of 586 adolescents aged 14-16. The instruments used were the Psychological Separation Inventory (PSI; Hoffman, 1984) and the Student Adaptation to College Questionnaire (SACQ; Baker & Siryk, 1984). The results showed that adolescents from boarding schools, compared with those who are not separated from their parents during secondary school, have significantly higher levels of separation from both parents, but discriminant analysis showed that adolescents from boarding schools express nostalgia for their parents and want more contact and support from them. Adolescents from boarding schools showed generally better adaptation, but lower emotional adaptation, compared with non-separated adolescents. Discriminant analysis showed that adolescents from boarding schools express low satisfaction with life in boarding school. The results confirm the hypothesis of a connection between psychological separation from parents and adaptation in adolescence. Canonical correlation analysis showed two statistically significant canonical factors. The first factor shows a significant connection between lower independence and better adaptation, with 23% of explained variance. The second factor indicates a connection between lower functional, emotional and attitudinal independence and better adaptation, with 12% of explained variance. The results are discussed in light of separation-individuation theory and the importance that separation from parents has for adolescents' adaptation to the demands of secondary and boarding school.
Mathematical modelling of membrane separation
Vinther, Frank
This thesis concerns mathematical modelling of membrane separation. The thesis consists of introductory theory on membrane separation, equations of motion, and properties of dextran, which will be the solute species throughout the thesis. Furthermore, the thesis consists of three separate mathematical models. It is found that the probability of entering the pore is highest when the largest of the radii of the ellipse is equal to half the radius of the pore, in the case of molecules with circular radius less than the pore radius. The results are directly related to the macroscopic distribution coefficient...
Capillary Separation: Micellar Electrokinetic Chromatography
Terabe, Shigeru
2009-07-01
Micellar electrokinetic chromatography (MEKC), a separation mode of capillary electrophoresis (CE), has enabled the separation of electrically neutral analytes. MEKC can be performed by adding an ionic micelle to the running solution of CE without modifying the instrument. Its separation principle is based on the differential migration of the ionic micelles and the bulk running buffer under electrophoresis conditions and on the interaction between the analyte and the micelle. Hence, MEKC's separation principle is similar to that of chromatography. MEKC is a useful technique particularly for the separation of small molecules, both neutral and charged, and yields high-efficiency separation in a short time with minimum amounts of sample and reagents. To improve the concentration sensitivity of detection, several on-line sample preconcentration techniques such as sweeping have been developed.
Separable programming theory and methods
Stefanov, Stefan M
2001-01-01
In this book, the author considers separable programming and, in particular, one of its important cases - convex separable programming. Some general results are presented, and techniques of approximating the separable problem by linear programming and dynamic programming are considered. Convex separable programs subject to inequality/equality constraint(s) and bounds on variables are also studied, and iterative algorithms of polynomial complexity are proposed. As an application, these algorithms are used in the implementation of stochastic quasigradient methods for some separable stochastic programs. Numerical approximation with respect to the I1 and I4 norms, as a convex separable nonsmooth unconstrained minimization problem, is considered as well. Audience: advanced undergraduate and graduate students, mathematical programming and operations research specialists.
Separation process using microchannel technology
Tonkovich, Anna Lee; Perry, Steven T.; Arora, Ravi; Qiu, Dongming; Lamont, Michael Jay; Burwell, Deanna; Dritz, Terence Andrew; McDaniel, Jeffrey S.; Rogers, William A., Jr.; Silva, Laura J.; Weidert, Daniel J.; Simmons, Wayne W.; Chadwell, G. Bradley
2009-03-24
The disclosed invention relates to a process and apparatus for separating a first fluid from a fluid mixture comprising the first fluid. The process comprises: (A) flowing the fluid mixture into a microchannel separator in contact with a sorption medium, the fluid mixture being maintained in the microchannel separator until at least part of the first fluid is sorbed by the sorption medium, removing non-sorbed parts of the fluid mixture from the microchannel separator; and (B) desorbing first fluid from the sorption medium and removing desorbed first fluid from the microchannel separator. The process and apparatus are suitable for separating nitrogen or methane from a fluid mixture comprising nitrogen and methane. The process and apparatus may be used for rejecting nitrogen in the upgrading of sub-quality methane.
Wastewater treatment with acoustic separator
Kambayashi, Takuya; Saeki, Tomonori; Buchanan, Ian
2017-07-01
Acoustic separation is a filter-free wastewater treatment method based on the forces generated in ultrasonic standing waves. In this report, a batch-system separator based on acoustic separation was demonstrated using a small-scale prototype acoustic separator to remove suspended solids from oil sand process-affected water (OSPW). By applying an acoustic separator to the batch use OSPW treatment, the required settling time, which was the time that the chemical oxygen demand (COD) decreased to the environmental criterion (<200 mg/L), could be shortened from 10 to 1 min. Moreover, for a 10 min settling time, the acoustic separator could reduce the FeCl3 dose as coagulant in OSPW treatment from 500 to 160 mg/L.
Moving related to separation : who moves and to what distance
Mulder, Clara H.; Malmberg, Gunnar
2011-01-01
We address the issue of moving from the joint home on the occasion of separation. Our research question is: To what extent can the occurrence of moves related to separation, and the distance moved, be explained by ties to the location, resources, and other factors influencing the likelihood of movin
Separation Method of Neptunium From Large Amount of Plutonium
JIN; Hua; SU; Yu-lan; YING; Zhe-cong; ZHAO; Sheng-yang
2013-01-01
A new method for separating neptunium from a large amount of plutonium using a TEVA column has been developed. A series of influencing factors is studied, such as resin type, valence adjustment of Np and Pu, and the extraction and elution behavior of Np on TEVA resin. Based on the above work, a separation procedure is recommended as follows: 1) Adjusting the
Hereditary separability in Hausdorff continua
D. Daniel
2012-04-01
Full Text Available We consider those Hausdorff continua S such that each separable subspace of S is hereditarily separable. Due to results of Ostaszewski and Rudin, respectively, all monotonically normal spaces and therefore all continuous Hausdorff images of ordered compacta also have this property. Our study focuses on the structure of such spaces that also possess one of various rim properties, with emphasis given to rim-separability. In so doing we obtain analogues of results of M. Tuncali and I. Loncar, respectively.
Liddell, Bette
2001-01-01
This report explores a range of underlying factors which appear to motivate the social behaviour of adults with severe learning difficulties. While there is ample evidence to suggest that these adults often behave in ways viewed as unacceptable by the wider population a skills deficit approach to the issue is frequently adopted. This dissertation argues that this view is both over-simplistic and inappropriately judgmental and that the behaviours demonstrated often serve an important purpose i...
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base was specialized into two special problems; that is, the subset D of the problem was taken to be an independent set and a circuit of the matroid, respectively. It was proved that under these circumstances the collections of k-limited bases satisfy the base axioms. A new matroid was thereby determined, and the problem of the k-limited maximum base was transformed into the problem of finding a maximum base of this new matroid. For the two special problems, two algorithms, in essence greedy algorithms based on the former matroid, were presented. They were proved to be correct and more efficient, in terms of algorithmic complexity, than the algorithm presented by Ma Zhongfan.
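The greedy principle that both special-case algorithms rest on is the standard matroid greedy for a maximum-weight base: scan elements in non-increasing weight order and keep each one that preserves independence. A generic sketch follows; the graphic-matroid example (independence = acyclicity, base = spanning tree) is illustrative, not from the paper:

```python
def greedy_max_base(elements, weight, independent):
    """Matroid greedy: keeping the heaviest elements that preserve
    independence yields a maximum-weight base of the matroid."""
    base = []
    for e in sorted(elements, key=weight, reverse=True):
        if independent(base + [e]):
            base.append(e)
    return base

# Graphic matroid on 4 vertices: edges as (u, v, weight) triples
edges = [("a", "b", 4), ("b", "c", 3), ("a", "c", 2), ("c", "d", 5)]

def acyclic(es):
    """Independence oracle: an edge set is independent iff it is a forest
    (checked with a simple union-find)."""
    parent = {}
    def find(x):
        while parent.get(x, x) != x:
            x = parent[x]
        return x
    for u, v, _ in es:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False        # edge closes a cycle
        parent[ru] = rv
    return True

tree = greedy_max_base(edges, weight=lambda e: e[2], independent=acyclic)
print(tree)  # the maximum-weight spanning tree; edge (a,c) is rejected
```

Restricting the matroid (as the paper does by fixing the role of the subset D) changes the independence oracle, not the greedy scan itself.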
Tonini, A.; Pede, V.
2011-01-01
In this paper, a stochastic frontier model accounting for spatial dependency is developed using generalized maximum entropy estimation. An application is made for measuring total factor productivity in European agriculture. The empirical results show that agricultural productivity growth in Europe i
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
Scheike, Thomas Harder; Juul, Anders
2004-01-01
Insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease, where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...
Relationship between oral status and maximum bite force in preschool children
Ching-Ming Su
2009-03-01
Conclusion: By combining the results of this study, it was concluded that associations of bite force with factors like age, maximum mouth opening and the number of teeth in contact were clearer than for other variables such as body height, body weight, occlusal pattern, and tooth decay or fillings.
Dai, Xiaoping; Han, Yuping; Zhang, Xiaohong; Hu, Wei; Huang, Liangji; Duan, Wenpei; Li, Siyi; Liu, Xiaolu; Wang, Qian
2017-09-01
A better understanding of willingness to separate waste and waste separation behaviour can aid the design and improvement of waste management policies. Based on intercept questionnaire survey data from undergraduate students and residents in Zhengzhou City, China, this article compared factors affecting the willingness and behaviour of students and residents to participate in waste separation using two binary logistic regression models. Improvement opportunities for waste separation were also discussed. Binary logistic regression results indicate that knowledge of and attitude to waste separation and acceptance of waste education significantly affect the willingness of undergraduate students to separate waste, and demographic factors, such as gender, age, education level, and income, significantly affect the willingness of residents to do so. Presence of waste-specific bins and attitude to waste separation are drivers of waste separation behaviour for both students and residents. Improved education about waste separation and better facilities are effective in stimulating waste separation, and charging for unsorted waste may be an effective way to improve it in Zhengzhou.
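A binary logistic regression of the kind used in this study can be fit with plain gradient ascent on the log-likelihood. The sketch below uses a single hypothetical "attitude score" predictor and made-up labels, not the survey data:

```python
import math

def sigmoid(z):
    # numerically stable logistic function
    return 1.0 / (1.0 + math.exp(-z)) if z >= 0 else math.exp(z) / (1.0 + math.exp(z))

def fit_logistic(X, y, lr=0.1, iters=5000):
    """Binary logistic regression by gradient ascent on the log-likelihood.
    Returns weights [w0 (intercept), w1, ...]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p                      # residual drives the update
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / len(X) for wj, g in zip(w, grad)]
    return w

# toy data: "willing to separate waste" (1) vs not (0) against an attitude score
X = [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]]
y = [0, 0, 0, 1, 1, 1]
w = fit_logistic(X, y)
```

A positive fitted coefficient on the attitude score corresponds to the paper's finding that attitude to waste separation drives willingness; in practice one would report odds ratios exp(w) and significance tests, which this sketch omits.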
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
With the ideas of the maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide region deletion test rules and design an interval maximum entropy algorithm for the quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
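The core trick, replacing the nonsmooth max in a penalty term by the differentiable maximum-entropy (log-sum-exp) function, can be sketched on a toy QP. Everything below (the example problem, the constants M and p, plain gradient descent) is illustrative; the paper's interval-arithmetic machinery is not reproduced:

```python
import math

def smooth_max0(g, p):
    """Maximum-entropy smoothing of max(0, g): (1/p)*ln(1 + exp(p*g)).
    Approaches max(0, g) as the smoothing parameter p grows."""
    t = p * g
    if t > 0:  # rewrite to avoid overflow of exp for large t
        return g + math.log1p(math.exp(-t)) / p
    return math.log1p(math.exp(t)) / p

def sigmoid(t):
    # derivative of smooth_max0 with respect to g is sigmoid(p*g)
    return 1.0 / (1.0 + math.exp(-t)) if t >= 0 else math.exp(t) / (1.0 + math.exp(t))

def solve_qp(p=50.0, M=10.0, step=0.005, iters=20000):
    """Minimize x^2 + y^2 subject to x + y >= 1 via the smoothed penalty
    f(x,y) + M*smooth_max0(1-x-y, p). Exact optimum is (0.5, 0.5)."""
    x = y = 0.0
    for _ in range(iters):
        g = 1.0 - x - y                  # constraint written as 1 - x - y <= 0
        s = M * sigmoid(p * g)           # gradient of the smoothed penalty term
        x -= step * (2.0 * x - s)
        y -= step * (2.0 * y - s)
    return x, y

x, y = solve_qp()
```

Larger p sharpens the approximation to the exact penalty but worsens conditioning, which is exactly why the paper pairs the smoothing with interval region-deletion tests rather than naive descent.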
Performance of a New Magnetic Chitosan Nanoparticle to Remove Arsenic and Its Separation from Water
Cheng Liu
2015-01-01
Full Text Available Removal performance of arsenic in water by a novel magnetic chitosan nanoparticle (MCNP) with a diameter of about 10 nm, including adsorption kinetics, adsorption isotherm, main influencing factors, and regeneration effects, was investigated. In addition, an effective separation method for MCNP particles and a new application mode were developed to promote the application of MCNP. The results showed that MCNP exhibited excellent ability to remove As(V) and As(III) from water over a wide range of initial concentrations; MCNP removed arsenic rapidly, with more than 95% of arsenic adsorbed in the initial 15 min, and the whole process fitted well to the pseudo-second-order model. The Langmuir model fits the equilibrium data better than the Freundlich isotherm model, and the maximum adsorption capacities of As(V) and As(III) were 65.5 mg/g and 60.2 mg/g, respectively. The saturated MCNP could be easily regenerated and retained more than 95% of its initial adsorption capacity after 10 regeneration cycles. A new magnetic material separation method was established to separate MCNP effectively. The continuous-operation instrument developed based on the MCNP could operate stably and guarantee that the concentration of arsenic meets the guideline limit for arsenic in drinking water regulated by the WHO.
Maijó, Irene; Borrull, Francesc; Aguilar, Carme; Calull, Marta
2013-02-01
Several strategies, namely, large volume sample stacking (LVSS), field-amplified sample injection (FASI), sweeping, and in-line SPE-CE, were investigated for the simultaneous separation and preconcentration of a group of parabens. A BGE consisting of 20 mM sodium dihydrogenphosphate (pH 2.28) and 150 mM SDS with 15% ACN was used for the separation and preconcentration of the compounds by sweeping, and a BGE consisting of 30 mM sodium borate (pH 9.5) was used for the separation and preconcentration of the compounds by LVSS, FASI, and in-line SPE-CE. Several factors affecting the preconcentration process were investigated in order to obtain the maximum enhancement of sensitivity. The LODs obtained for parabens were in the range of 18-27, 3-4, 2, and 0.01-0.02 ng/mL, and the sensitivity evaluated in terms of LODs was improved up to 29-, 77-, 120-, and 18,400-fold for sweeping, LVSS, FASI, and in-line SPE-CE, respectively. These preconcentration techniques showed potential as good strategies for focusing parabens. The four methods were validated with standard samples to show the potential of these techniques for future applications in real samples, such as biological and environmental samples.
tmle : An R Package for Targeted Maximum Likelihood Estimation
Susan Gruber
2012-11-01
Full Text Available Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient double-robust semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user-supplied covariates, including an additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.
Gas Separation in the Ranque-Hilsch Vortex tube
Linderstrøm-Lang, C. U.
1964-01-01
The gas separation taking place in the vortex tube is studied in detail. Both enrichment and depletion of a given component in any one of the two resultant streams may take place; the sign of this separation effect depends on certain parameters, notably the hot to cold flow ratio. A comparison...... of the data shows how the pattern of the effect curve, i.e. the separation effect as a function of hot flow fraction, varies with constructional parameters. Among these the ratio of the diameters of the two orifices through which the gas escapes from the tube, is of paramount importance. Also their magnitude...... relative to the tube diameter has a distinct modifying effect. The separation ability as a function of the tube length has a maximum at quite short lengths, dependent, however, on the inlet jet diameter in such a way that an increase in this causes an increase in the optimal length. The conclusion...
Suligowski, Roman
2014-05-01
Probable Maximum Precipitation based upon the physical mechanisms of precipitation formation at the Kielce Upland. This estimation stems from meteorological analysis of extremely high precipitation events which occurred in the area between 1961 and 2007, causing serious flooding from rivers that drain the entire Kielce Upland. The meteorological situation was assessed drawing on synoptic maps, baric topography charts, satellite and radar images, as well as the results of meteorological observations from surface weather observation stations. The most significant elements of this research include the comparison of distinctive synoptic situations over Europe and the subsequent determination of the typical rainfall-generating mechanism. This allows the author to identify the source areas of the air masses responsible for extremely high precipitation at the Kielce Upland. Analysis of the meteorological situations showed that the source areas for the humid air masses which cause the largest rainfalls at the Kielce Upland are the northern Adriatic Sea and the north-eastern coast of the Black Sea. Flood hazard in the Kielce Upland catchments was triggered by daily precipitation of over 60 mm. The highest representative dew point temperature in the source areas of warm air masses (those responsible for high precipitation at the Kielce Upland) exceeded 20 degrees Celsius, with a maximum of 24.9 degrees Celsius, while precipitable water amounted to 80 mm. The value of precipitable water is also used for computation of factors characterizing the system, namely the mass transformation factor and the system effectiveness factor. The mass transformation factor is computed from precipitable water in the feeding mass and precipitable water in the source area. The system effectiveness factor (as the indicator of the maximum inflow velocity and the maximum velocity in the zone of front or ascending currents, forced by orography) is computed from the quotient of precipitable water in
Integer Programming Model for Maximum Clique in Graph
YUAN Xi-bo; YANG You; ZENG Xin-hai
2005-01-01
The maximum clique or maximum independent set of a graph is a classical problem in graph theory. Combining Boolean algebra and integer programming, two integer programming models for the maximum clique problem, which improve on earlier results, were designed in this paper. The programming model for the maximum independent set then follows as a corollary of the main results. These two models can be easily implemented in computer algorithms and software, and are suitable for graphs of any scale. Finally the models are presented as Lingo algorithms, verified and compared on several examples.
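The standard IP model alluded to here maximizes the sum of 0/1 vertex variables subject to x_i + x_j <= 1 for every non-adjacent pair. A brute-force sketch that checks those constraints directly (fine for toy graphs, exponential in general; the example graph is invented, and a real solver such as Lingo would handle the same model at scale):

```python
from itertools import combinations, product

def max_clique_ip(n, edges):
    """Enumerate 0/1 vectors satisfying x_i + x_j <= 1 for every non-edge
    (the classical IP model of maximum clique) and return the best one."""
    edge_set = {frozenset(e) for e in edges}
    non_edges = [(i, j) for i, j in combinations(range(n), 2)
                 if frozenset((i, j)) not in edge_set]
    best = []
    for x in product((0, 1), repeat=n):
        # feasibility: no two chosen vertices may be non-adjacent
        if all(x[i] + x[j] <= 1 for i, j in non_edges):
            chosen = [i for i in range(n) if x[i]]
            if len(chosen) > len(best):
                best = chosen
    return best

# 5-cycle on vertices 0..4 plus the chord (0,2), giving triangle {0,1,2}
clique = max_clique_ip(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)])
```

The complementary model for maximum independent set is the same program run on the complement graph, which is the corollary the abstract mentions.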
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we survey the development of the maximum-entropy clustering algorithm and point out that the algorithm is not new in essence. We construct two examples showing that the iterative sequence given by the maximum-entropy clustering algorithm may converge not to a local minimum of its objective function but to a saddle point. Based on these results, the paper shows that the convergence theorem for the maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general.
Segal, Nancy L
2014-12-01
There is a lack of research findings addressing the unique college admissions issues faced by twins and other multiples. The advantages and disadvantage twins face, as reported by college administrators, twins and families are reviewed. Next, recent research addressing twins' birth weight and neuromotor performance, transfusion syndrome markers, the vanishing twin syndrome and monozygotic (MZ) twin discordance for Wilson's disease is described. News items concerning the birth of unusually large twins, the planned separation of conjoined twins, twin participants in the X Factor games and a film, The Identical, are also summarized.
Werner, Jan; Griebeler, Eva Maria
2014-01-01
We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either of
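Fitting "regressions with this fixed slope and group-specific intercepts" reduces, for one group, to averaging the residuals of log-rate minus 0.75 times log-mass. A sketch with fabricated numbers (not the paper's data), where the synthetic rates are generated so the true intercept is exactly -1:

```python
import math

def fixed_slope_intercept(masses, rates, slope=0.75):
    """Least-squares intercept a of log10(rate) = slope*log10(mass) + a
    with the slope held fixed (the 0.75 expected from the Metabolic
    Theory of Ecology). With a fixed slope the LS intercept is just the
    mean residual."""
    resid = [math.log10(r) - slope * math.log10(m)
             for m, r in zip(masses, rates)]
    return sum(resid) / len(resid)

# fabricated body masses and maximum growth rates on an exact power law
masses = [1.0, 10.0, 100.0, 1000.0]
rates = [10 ** (0.75 * math.log10(m) - 1.0) for m in masses]
a = fixed_slope_intercept(masses, rates)
```

Comparing such intercepts across taxonomic groups is what yields the roughly 10- to 100-fold endotherm/ectotherm offsets reported above: a difference of 1 in log10 intercept corresponds to a 10-fold difference in growth rate at any given mass.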
Microfluidic immunomagnetic separation for enhanced bacterial detection
Hoyland, James; Kunstmann-Olsen, Casper; Ahmed, Shakil
A combined lab-on-a-chip approach combining immunomagnetic separation (IMS) and flow cytometry was developed for the enrichment and detection of Salmonella contamination in food samples. Immunomagnetic beads were immobilized in chips consisting of long fractal meanders while contaminated samples were flowed over them. After incubation the beads can be released for detection into the flow-cytometry chip. Immunomagnetic beads were prepared by using anti-Salmonella antibodies and magnetic beads (2.8 μm). Both the synthesized and commercially available anti-Salmonella beads were used to capture...... to obtain maximum capturing efficiency. The effects of channel volume, path length and number of bends of the microfluidic chip on IMS efficiency were also determined.
Phase separation during radiation crosslinking of unsaturated polyester resin
Pucić, Irina; Ranogajec, Franjo
2003-06-01
Phase separation during radiation-initiated crosslinking of unsaturated polyester resin was studied. Residual reactivity of liquid phases and gels of partially cured samples was determined by DSC. Uncured resin and liquid phases showed double reaction exotherm, gels had a single maximum that corresponded to higher-temperature maximum of liquid parts. The lower-temperature process was attributed to styrene-polyester copolymerization. At higher temperatures, polyester unsaturations that remained unreacted due to microgel formation homopolymerized. FTIR revealed different composition of phases. In thicker samples, reaction heat influenced microgel formation causing delayed appearance of gel and faster increase in conversion.
Parental separation and pediatric cancer
Grant, Sally; Carlsen, Kathrine; Bidstrup, Pernille Envold Hansen
2012-01-01
The purpose of this study was to determine the risk for separation (ending cohabitation) of the parents of a child with a diagnosis of cancer.
Fast Monaural Separation of Speech
Pontoppidan, Niels Henrik; Dyrholm, Mads
2003-01-01
a Factorial Hidden Markov Model, with non-stationary assumptions on the source autocorrelations modelled through the Factorial Hidden Markov Model, leads to separation in the monaural case. By extending Hansen's work we find that Roweis' assumptions are necessary for monaural speech separation. Furthermore we...
Metals Separation by Liquid Extraction.
Malmary, G.; And Others
1984-01-01
As part of a project focusing on techniques in industrial chemistry, students carry out experiments on separating copper from cobalt in chloride-containing aqueous solution by liquid extraction with triisooctylamine solvent, and search the literature on the separation process for these metals. These experiments and the literature research are…
Vision 2020: 2000 Separations Roadmap
Adler, Stephen [Center for Waster Reduction Technologies; Beaver, Earl [Practical Sustainability; Bryan, Paul [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Robinson, Sharon [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Watson, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2000-01-01
This report documents the results of four workshops on the technology barriers, research needs, and priorities of the chemical, agricultural, petroleum, and pharmaceutical industries as they relate to separation technologies utilizing adsorbents, crystallization, distillation, extraction, membranes, separative reactors, ion exchange, bioseparations, and dilute solutions.
Electrostatically enhanced core separator system
Easom, B.H.; Smolensky, L.A.; Altman, R.F. [LSR Technologies, Inc., Acton, MA (United States)
1997-12-31
The Electrostatically Enhanced Core Separator (EECS) system employs the same design principles as the mechanical Core Separator system plus an electrostatic separation-enhancing technique. The EECS system contains a special type of separator, the EECS element, a conventional solids collector and means for flow recirculation. In the EECS system, solids separation and collection are accomplished in two different components. The EECS element acts as a separator, not as a collector, so particles are not collected on its walls. This eliminates or at least mitigates the problems associated with reentrainment (due to high or low dust resistivity), seepage (due to gas flow below the precipitator plates and over the hoppers), sneakage (due to gas flow both above and below the precipitator plates), and rapping reentrainment. If the EECS separation efficiency is high enough, particles cannot leave the system with the process stream. They recirculate until they are extracted by the collector. As a result, the separation efficiency of the EECS element determines the efficiency of the system, even if the collector efficiency is relatively low. 8 refs., 3 figs.
Relational Parametricity and Separation Logic
Birkedal, Lars; Yang, Hongseok
2008-01-01
Separation logic is a recent extension of Hoare logic for reasoning about programs with references to shared mutable data structures. In this paper, we provide a new interpretation of the logic for a programming language with higher types. Our interpretation is based on Reynolds's relational...... parametricity, and it provides a formal connection between separation logic and data abstraction. Publication date: 2008...
Analysis of uranium isotope separation by redox chromatography
Fujine, S.; Naruse, Y.; Shiba, K.
1983-09-01
Uranium isotope separation by redox chromatography is analytically studied. The periodic withdrawal of products and tails and the introduction of natural feed are simulated on the assumption of a square cascade for a uranium adsorption band. The influences on the separative power and the lead time until product withdrawal are investigated by varying the magnitude of the isotope separation factor, uranium band length, tails concentration, and so on. Simulating calculations indicate that using ion-exchange resins to achieve uranium isotope separation requires a very long lead time for the production of highly enriched uranium.
Maximum detection range limitation of pulse laser radar with Geiger-mode avalanche photodiode array
Luo, Hanjun; Xu, Benlian; Xu, Huigang; Chen, Jingbo; Fu, Yadan
2015-05-01
When designing and evaluating the performance of a laser radar system, the maximum achievable detection range is an essential parameter. The purpose of this paper is to propose a theoretical model of maximum detection range for simulating the ranging performance of Geiger-mode laser radar. Based on the laser radar equation and the requirement of a minimum acceptable detection probability, and assuming that the primary electrons triggered by the echo photons obey Poisson statistics, the maximum-range theoretical model is established. Using the system design parameters, the influence on the maximum detection range of five main factors, namely emitted pulse energy, noise, echo position, atmospheric attenuation coefficient, and target reflectivity, is investigated. The results show that stronger emitted pulse energy, a lower noise level, an earlier echo position in the range gate, a lower atmospheric attenuation coefficient, and higher target reflectivity yield a greater maximum detection range. It is also shown that it is important to select the minimum acceptable detection probability, which is equivalent to the system signal-to-noise ratio, so as to obtain a greater maximum detection range and a lower false-alarm probability.
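The Poisson assumption gives detection probability 1 - exp(-n) for a mean primary-electron count n, and the maximum range is the largest R at which this still meets the required minimum. The sketch below uses a radar-equation-style scaling for n with lumped, invented constants (k, and all numeric values), not the paper's parameters:

```python
import math

def detection_probability(n_photo):
    """Geiger-mode APD: probability of at least one primary electron,
    assuming Poisson-distributed primaries with mean n_photo."""
    return 1.0 - math.exp(-n_photo)

def max_range(E, rho, alpha, p_min, k=1e6):
    """Largest whole-metre range R with detection probability >= p_min,
    where the mean primary-electron count follows a radar-equation-style
    scaling n = k*E*rho*exp(-2*alpha*R)/R**2. k lumps optics, aperture
    and quantum efficiency; all constants are illustrative."""
    R = 1.0
    while True:
        n = k * E * rho * math.exp(-2.0 * alpha * R) / R ** 2
        if detection_probability(n) < p_min:
            return R - 1.0  # previous range still met the requirement
        R += 1.0
```

Sweeping pulse energy, reflectivity or attenuation through this model reproduces the qualitative trends the abstract reports: each factor enters n monotonically, so the maximum range moves the same way.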
Kim, Leonard, E-mail: kimlh@umdnj.edu [Department of Radiation Oncology, Cancer Institute of New Jersey, Robert Wood Johnson Medical School, University of Medicine and Dentistry of New Jersey, New Brunswick, NJ (United States); Narra, Venkat; Yue, Ning [Department of Radiation Oncology, Cancer Institute of New Jersey, Robert Wood Johnson Medical School, University of Medicine and Dentistry of New Jersey, New Brunswick, NJ (United States)
2013-07-01
Recent studies have reported potentially clinically meaningful dose differences when heterogeneity correction is used in breast balloon brachytherapy. In this study, we report on the relationship between heterogeneity-corrected and -uncorrected doses for 2 commonly used plan evaluation metrics: maximum point dose to skin surface and maximum point dose to ribs. Maximum point doses to skin surface and ribs were calculated using TG-43 and Varian Acuros for 20 patients treated with breast balloon brachytherapy. The results were plotted against each other and fit with a zero-intercept line. Max skin dose (Acuros) = max skin dose (TG-43) × 0.930 (R² = 0.995). The average magnitude of difference from this relationship was 1.1% (max 2.8%). Max rib dose (Acuros) = max rib dose (TG-43) × 0.955 (R² = 0.9995). The average magnitude of difference from this relationship was 0.7% (max 1.6%). Heterogeneity-corrected maximum point doses to the skin surface and ribs were proportional to TG-43-calculated doses. The average deviation from proportionality was 1%. The proportional relationship suggests that a different metric other than maximum point dose may be needed to obtain a clinical advantage from heterogeneity correction. Alternatively, if maximum point dose continues to be used in recommended limits while incorporating heterogeneity correction, institutions without this capability may be able to accurately estimate these doses by use of a scaling factor.
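The proportionality fit used here is ordinary least squares through the origin, whose slope has the closed form sum(x*y)/sum(x*x). A sketch with invented dose values scaled (noise-free, for illustration only) by the reported 0.930 skin-dose ratio:

```python
def zero_intercept_slope(x, y):
    """Least-squares slope b of the line y = b*x forced through the origin:
    b = sum(x_i*y_i) / sum(x_i^2)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# hypothetical TG-43 maximum skin doses (Gy) and heterogeneity-corrected
# counterparts; the 0.93 factor is the paper's reported ratio
tg43 = [3.0, 3.4, 4.1, 4.8]
acuros = [d * 0.93 for d in tg43]
b = zero_intercept_slope(tg43, acuros)
```

With real patient data the points scatter around the line (about 1% on average per the abstract), but the recovered slope is the single scaling factor an institution without Acuros could apply to its TG-43 doses.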
Testing Orion's Fairing Separation System
Martinez, Henry; Cloutier, Chris; Lemmon, Heber; Rakes, Daniel; Oldham, Joe; Schlagel, Keith
2014-01-01
Traditional fairing systems are designed to fully encapsulate and protect their payload from the harsh ascent environment including acoustic vibrations, aerodynamic forces and heating. The Orion fairing separation system performs this function and more by also sharing approximately half of the vehicle structural load during ascent. This load-share condition through launch and during jettison allows for a substantial increase in mass to orbit. A series of component-level development tests were completed to evaluate and characterize each component within Orion's unique fairing separation system. Two full-scale separation tests were performed to verify system-level functionality and provide verification data. This paper summarizes the fairing spring, Pyramidal Separation Mechanism and forward seal system component-level development tests, system-level separation tests, and lessons learned.
Modelling non-stationary annual maximum flood heights in the lower Limpopo River basin of Mozambique
Daniel Maposa
2016-03-01
Full Text Available In this article we fit a time-dependent generalised extreme value (GEV) distribution to annual maximum flood heights at three sites: Chokwe, Sicacate and Combomune in the lower Limpopo River basin of Mozambique. A GEV distribution is fitted to six annual maximum time series models at each site, namely: annual daily maximum (AM1), annual 2-day maximum (AM2), annual 5-day maximum (AM5), annual 7-day maximum (AM7), annual 10-day maximum (AM10) and annual 30-day maximum (AM30). Non-stationary time-dependent GEV models with a linear trend in the location and scale parameters are considered in this study. The results show a lack of sufficient evidence to indicate a linear trend in the location parameter at all three sites. On the other hand, the findings reveal strong evidence of a linear trend in the scale parameter at Combomune and Sicacate, whilst the scale parameter had no significant linear trend at Chokwe. Further investigation also reveals that the location parameter at Sicacate can be modelled by a nonlinear quadratic trend; however, the added complexity of the overall model is not worthwhile relative to a time-homogeneous model. This study shows the importance of extending the time-homogeneous GEV model to incorporate climate change factors such as trend in the lower Limpopo River basin, particularly in this era of global warming and a changing climate. Keywords: nonstationary extremes; annual maxima; lower Limpopo River; generalised extreme value
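A practical consequence of a trend in the GEV scale parameter is that return levels drift over time. The sketch below computes a T-year return level from the GEV quantile function; all parameter values are invented for illustration and are not the fitted Limpopo estimates:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level z solving the GEV CDF equation F(z) = 1 - 1/T:
    z = mu - (sigma/xi) * (1 - y^(-xi)), with y = -ln(1 - 1/T).
    Falls back to the Gumbel limit for xi near 0."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:
        return mu - sigma * math.log(y)
    return mu - (sigma / xi) * (1.0 - y ** (-xi))

def scale_at(t, sigma0=1.2, sigma1=0.01):
    """Hypothetical non-stationary scale sigma(t) = sigma0 + sigma1*t,
    echoing the linear scale trend found at Combomune and Sicacate."""
    return sigma0 + sigma1 * t

# 100-year flood-height return level now versus 30 years into the trend
z_now = gev_return_level(6.0, scale_at(0), 0.1, 100)
z_later = gev_return_level(6.0, scale_at(30), 0.1, 100)
```

Under a growing scale parameter the 100-year level increases even with a constant location parameter, which is why ignoring the trend would understate future flood hazard.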
梁玲娟; 徐凌燕; 陈萧萧
2015-01-01
Objective: To explore the influencing factors of alexithymia in patients with separation (conversion) disorder at a hospital in Taizhou City. Methods: 226 patients with separation (conversion) disorder completed the Toronto Alexithymia Scale; single-factor analysis and logistic regression analysis were used to explore the factors associated with alexithymia in these patients. Results: The incidence of alexithymia was 40.71%, and the total alexithymia score was 63.53 ± 13.04. There were significant differences in age, character, personality traits, depression and anxiety between the alexithymia and non-alexithymia groups (P < 0.05). Logistic regression analysis showed that personality traits, character, depression and anxiety were independent risk factors for alexithymia in patients with separation (conversion) disorder (P < 0.05). Conclusion: Alexithymia is common in patients with separation (conversion) disorder and is influenced by personality traits, character, depression, anxiety and other factors; medical staff could target these influencing factors in clinical intervention.
Novel blind source separation algorithm using Gaussian mixture density function
孔薇; 杨杰; 周越
2004-01-01
The blind source separation (BSS) is an important task for numerous applications in signal processing, communications and array processing. But for many complex sources, blind separation algorithms are not efficient because the probability distribution of the sources cannot be estimated accurately. So in this paper, to justify the ME (maximum entropy) approach, the relation between the ME and the MMI (minimum mutual information) is elucidated first. Then a novel algorithm that uses a Gaussian mixture density to approximate the probability distribution of the sources is presented based on the ME approach. The experiment on the BSS of ship-radiated noise demonstrates that the proposed algorithm is valid and efficient.
When do evolutionary algorithms optimize separable functions in parallel?
Doerr, Benjamin; Sudholt, Dirk; Witt, Carsten
2013-01-01
is that evolutionary algorithms make progress on all subfunctions in parallel, so that optimizing a separable function does not take much longer than optimizing the hardest subfunction; subfunctions are optimized "in parallel." We show that this is only partially true, already for the simple (1+1) evolutionary...... algorithm ((1+1) EA). For separable functions composed of k Boolean functions the optimization time is indeed the maximum optimization time of these functions times a small O(log k) overhead. More generally, for sums of weighted subfunctions that each attain non-negative integer values less than r = o(log1...
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...
49 CFR 174.86 - Maximum allowable operating speed.
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...
Parametric optimization of thermoelectric elements footprint for maximum power generation
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
The development studies in thermoelectric generator (TEG) systems are mostly disconnected to parametric optimization of the module components. In this study, optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-perform...
30 CFR 56.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 56.19066 Section 56.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 56.19066 Maximum riders in a conveyance. In shafts inclined over 45...